Nov 22 07:02:35 crc systemd[1]: Starting Kubernetes Kubelet... Nov 22 07:02:36 crc restorecon[4767]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Nov 22 07:02:36 
crc restorecon[4767]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 
07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc 
restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 
crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 
crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 
07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 22 07:02:36 crc 
restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:02:36 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 
07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 
07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc 
restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 22 07:02:37 crc restorecon[4767]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 22 07:02:38 crc kubenswrapper[4856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:02:38 crc kubenswrapper[4856]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 22 07:02:38 crc kubenswrapper[4856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:02:38 crc kubenswrapper[4856]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 22 07:02:38 crc kubenswrapper[4856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 22 07:02:38 crc kubenswrapper[4856]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.339202 4856 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342023 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342041 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342046 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342050 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342055 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342060 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342069 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342073 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342077 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342081 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342085 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342089 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342094 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342099 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342102 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342107 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342110 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342114 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342118 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342121 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342124 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342129 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342134 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342138 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342142 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342146 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342150 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342154 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342158 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342161 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342164 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342168 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342172 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342176 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342179 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342183 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342186 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342190 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 
07:02:38.342195 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342199 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342202 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342206 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342210 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342214 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342218 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342221 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342225 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342228 4856 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342232 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342235 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342238 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342274 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342278 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342281 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342285 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342288 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342291 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342294 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342299 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342304 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342309 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342314 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342318 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342321 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342325 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342328 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342332 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342335 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342338 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342343 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.342347 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345539 4856 flags.go:64] FLAG: --address="0.0.0.0" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345566 4856 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345576 4856 flags.go:64] FLAG: --anonymous-auth="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345584 4856 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345591 4856 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345596 4856 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345605 4856 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345611 4856 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345617 4856 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345621 4856 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345627 4856 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345632 4856 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345637 4856 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345643 4856 flags.go:64] FLAG: --cgroup-root="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345648 4856 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345653 4856 flags.go:64] FLAG: --client-ca-file="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345658 4856 flags.go:64] FLAG: --cloud-config="" Nov 22 
07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345662 4856 flags.go:64] FLAG: --cloud-provider="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345667 4856 flags.go:64] FLAG: --cluster-dns="[]" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345673 4856 flags.go:64] FLAG: --cluster-domain="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345678 4856 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345683 4856 flags.go:64] FLAG: --config-dir="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345687 4856 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345693 4856 flags.go:64] FLAG: --container-log-max-files="5" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345700 4856 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345706 4856 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345711 4856 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345716 4856 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345722 4856 flags.go:64] FLAG: --contention-profiling="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345727 4856 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345732 4856 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345738 4856 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345744 4856 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345751 4856 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345756 4856 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345762 4856 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345767 4856 flags.go:64] FLAG: --enable-load-reader="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345772 4856 flags.go:64] FLAG: --enable-server="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345777 4856 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345784 4856 flags.go:64] FLAG: --event-burst="100" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345790 4856 flags.go:64] FLAG: --event-qps="50" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345795 4856 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345804 4856 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345809 4856 flags.go:64] FLAG: --eviction-hard="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345814 4856 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345818 4856 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345823 4856 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345828 4856 flags.go:64] FLAG: --eviction-soft="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345832 4856 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345837 4856 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345841 4856 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345845 4856 flags.go:64] FLAG: --experimental-mounter-path="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345850 4856 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345855 4856 flags.go:64] FLAG: --fail-swap-on="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345860 4856 flags.go:64] FLAG: --feature-gates="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345867 4856 flags.go:64] FLAG: --file-check-frequency="20s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345873 4856 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345878 4856 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345883 4856 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345888 4856 flags.go:64] FLAG: --healthz-port="10248" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345893 4856 flags.go:64] FLAG: --help="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345898 4856 flags.go:64] FLAG: --hostname-override="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345904 4856 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345910 4856 flags.go:64] FLAG: --http-check-frequency="20s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345916 4856 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345921 4856 flags.go:64] FLAG: --image-credential-provider-config="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345927 4856 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345932 4856 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345936 4856 flags.go:64] FLAG: --image-service-endpoint="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345941 4856 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345945 4856 flags.go:64] FLAG: --kube-api-burst="100" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345949 4856 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345953 4856 flags.go:64] FLAG: --kube-api-qps="50" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345957 4856 flags.go:64] FLAG: --kube-reserved="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345961 4856 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345965 4856 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345969 4856 flags.go:64] FLAG: 
--kubelet-cgroups="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345974 4856 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345979 4856 flags.go:64] FLAG: --lock-file="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345983 4856 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345987 4856 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345992 4856 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.345998 4856 flags.go:64] FLAG: --log-json-split-stream="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346002 4856 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346006 4856 flags.go:64] FLAG: --log-text-split-stream="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346010 4856 flags.go:64] FLAG: --logging-format="text" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346014 4856 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346019 4856 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346023 4856 flags.go:64] FLAG: --manifest-url="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346028 4856 flags.go:64] FLAG: --manifest-url-header="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346034 4856 flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346039 4856 flags.go:64] FLAG: --max-open-files="1000000" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346045 4856 flags.go:64] FLAG: --max-pods="110" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346049 4856 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346054 4856 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346059 4856 flags.go:64] FLAG: --memory-manager-policy="None" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346064 4856 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346068 4856 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346073 4856 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346077 4856 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346090 4856 flags.go:64] FLAG: --node-status-max-images="50" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346094 4856 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346098 4856 flags.go:64] FLAG: --oom-score-adj="-999" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346102 4856 flags.go:64] FLAG: --pod-cidr="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346106 4856 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 22 07:02:38 crc 
kubenswrapper[4856]: I1122 07:02:38.346114 4856 flags.go:64] FLAG: --pod-manifest-path="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346118 4856 flags.go:64] FLAG: --pod-max-pids="-1" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346122 4856 flags.go:64] FLAG: --pods-per-core="0" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346127 4856 flags.go:64] FLAG: --port="10250" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346131 4856 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346136 4856 flags.go:64] FLAG: --provider-id="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346141 4856 flags.go:64] FLAG: --qos-reserved="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346145 4856 flags.go:64] FLAG: --read-only-port="10255" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346150 4856 flags.go:64] FLAG: --register-node="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346157 4856 flags.go:64] FLAG: --register-schedulable="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346161 4856 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346174 4856 flags.go:64] FLAG: --registry-burst="10" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346178 4856 flags.go:64] FLAG: --registry-qps="5" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346182 4856 flags.go:64] FLAG: --reserved-cpus="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346186 4856 flags.go:64] FLAG: --reserved-memory="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346192 4856 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346196 4856 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346200 4856 flags.go:64] FLAG: --rotate-certificates="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346204 4856 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346208 4856 flags.go:64] FLAG: --runonce="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346212 4856 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346216 4856 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346220 4856 flags.go:64] FLAG: --seccomp-default="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346225 4856 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346230 4856 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346236 4856 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346268 4856 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346274 4856 flags.go:64] FLAG: --storage-driver-password="root" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346279 4856 flags.go:64] FLAG: --storage-driver-secure="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346284 4856 flags.go:64] FLAG: --storage-driver-table="stats" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346289 4856 flags.go:64] FLAG: 
--storage-driver-user="root" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346294 4856 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346299 4856 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346303 4856 flags.go:64] FLAG: --system-cgroups="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346308 4856 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346316 4856 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346320 4856 flags.go:64] FLAG: --tls-cert-file="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346324 4856 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346330 4856 flags.go:64] FLAG: --tls-min-version="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346334 4856 flags.go:64] FLAG: --tls-private-key-file="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346338 4856 flags.go:64] FLAG: --topology-manager-policy="none" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346342 4856 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346346 4856 flags.go:64] FLAG: --topology-manager-scope="container" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346350 4856 flags.go:64] FLAG: --v="2" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346356 4856 flags.go:64] FLAG: --version="false" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346362 4856 flags.go:64] FLAG: --vmodule="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346368 4856 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346372 4856 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346529 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346535 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346540 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346544 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346548 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346552 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346556 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346560 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346567 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346570 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346573 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 
07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346577 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346580 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346584 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346587 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346591 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346594 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346598 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346602 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346607 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346610 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346614 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346618 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346622 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346627 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346631 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346635 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346639 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346643 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346647 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346651 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346655 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346659 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346663 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346667 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346671 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346674 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346678 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346681 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346686 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346693 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346696 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346743 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346748 4856 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346751 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346755 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346759 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346762 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346765 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346769 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346772 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346779 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346783 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346787 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346791 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346795 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346798 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346801 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346805 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346810 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346815 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346819 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346824 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346829 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346835 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346839 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346844 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346848 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346852 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346858 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.346862 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.346874 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.372360 4856 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.372404 4856 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372529 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372548 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372555 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372562 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372568 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372573 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372579 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372585 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372590 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372596 4856 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372601 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372607 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372612 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372617 4856 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372622 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372628 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372633 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372639 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372644 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372650 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372655 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372660 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372666 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372671 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372676 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372681 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372687 4856 
feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372694 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372702 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372709 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372716 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372723 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372730 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372737 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372743 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372749 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372754 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372760 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372765 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372770 4856 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372775 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372781 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372786 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372793 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372800 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372806 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372812 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372818 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372824 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372829 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372836 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372841 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372849 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372855 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372861 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372866 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372871 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372877 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372882 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372887 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372913 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372920 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372926 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372932 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372939 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372959 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372965 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372971 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372976 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372981 4856 feature_gate.go:330] 
unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.372986 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.372996 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373157 4856 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373167 4856 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373174 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373180 4856 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373185 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373191 4856 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373196 4856 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373203 4856 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373211 4856 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373217 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373223 4856 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373229 4856 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373235 4856 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373240 4856 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373245 4856 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373250 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373255 4856 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373261 4856 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373267 4856 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373272 4856 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373277 4856 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373282 4856 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373288 4856 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373293 4856 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373298 4856 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373303 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373309 4856 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373314 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373319 4856 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373325 4856 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373330 4856 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373335 4856 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373340 4856 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373345 4856 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373351 4856 feature_gate.go:330] unrecognized feature gate: 
SigstoreImageVerification Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373356 4856 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373361 4856 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373366 4856 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373372 4856 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373377 4856 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373383 4856 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373388 4856 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373393 4856 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373399 4856 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373404 4856 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373410 4856 feature_gate.go:330] unrecognized feature gate: Example Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373415 4856 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373420 4856 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373425 4856 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373430 4856 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373435 4856 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373440 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373445 4856 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373450 4856 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373456 4856 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373461 4856 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373466 4856 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373472 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373477 4856 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373482 4856 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373488 4856 feature_gate.go:330] unrecognized 
feature gate: ImageStreamImportMode Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373493 4856 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373498 4856 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373504 4856 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373532 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373540 4856 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373546 4856 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373551 4856 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373556 4856 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373561 4856 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.373568 4856 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.373577 4856 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.373773 4856 server.go:940] "Client rotation is on, will bootstrap in background" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.383671 4856 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.383767 4856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.387477 4856 server.go:997] "Starting client certificate rotation" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.387525 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.388624 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-22 07:23:01.09810758 +0000 UTC Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.388720 4856 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 720h20m22.709393029s for next certificate rotation Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.457793 4856 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.463902 4856 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.509808 4856 log.go:25] "Validated CRI v1 runtime API" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.569283 4856 log.go:25] "Validated CRI v1 image API" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.572081 4856 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.580350 4856 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-22-06-57-26-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.580378 4856 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.596285 4856 manager.go:217] Machine: {Timestamp:2025-11-22 07:02:38.592568035 +0000 UTC m=+1.005961313 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:f77229a0-445b-4f39-ab07-3ae475712a7b BootID:306542ef-d3ef-4be8-9ac9-776f57e8a26c Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 
Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e1:d8:c2 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e1:d8:c2 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:d5:c6:33 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:79:c3:5e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8a:46:47 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cc:86:61 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:1b:8f:b2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:15:c6:7b:82:22 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:42:19:64:9f:7a:94 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.597253 4856 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.597541 4856 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.600066 4856 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.600646 4856 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.600687 4856 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.600946 4856 topology_manager.go:138] "Creating topology manager with none policy" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.600958 4856 container_manager_linux.go:303] "Creating device plugin manager" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.601660 4856 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.601735 4856 server.go:66] "Creating device plugin registration server" 
version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.602001 4856 state_mem.go:36] "Initialized new in-memory state store" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.602147 4856 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.614741 4856 kubelet.go:418] "Attempting to sync node with API server" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.614793 4856 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.614902 4856 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.614936 4856 kubelet.go:324] "Adding apiserver pod source" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.614961 4856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.623612 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.623804 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.623581 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.623852 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.626563 4856 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.627910 4856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.630036 4856 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638154 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638206 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638233 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638248 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638274 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638327 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638343 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638368 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638394 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638409 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638429 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.638443 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.639743 4856 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.640733 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.640970 4856 server.go:1280] "Started kubelet" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.641436 4856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.641466 4856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.643058 4856 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.643693 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.643741 4856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.643759 4856 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:25:51.799057272 +0000 UTC Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.643790 4856 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Waiting 1110h23m13.15526902s for next certificate rotation Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.643959 4856 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.643973 4856 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.644074 4856 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.644109 4856 server.go:460] "Adding debug handlers to kubelet server" Nov 22 07:02:38 crc systemd[1]: Started Kubernetes Kubelet. Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.644840 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.646002 4856 factory.go:55] Registering systemd factory Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.646403 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.646547 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.652195 4856 factory.go:221] Registration of the systemd container factory successfully Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.652693 4856 factory.go:153] Registering CRI-O factory Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.652731 4856 factory.go:221] Registration of the crio container factory successfully Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.652819 4856 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.652860 4856 factory.go:103] Registering Raw factory Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.652879 4856 manager.go:1196] Started watching for new ooms in manager Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.653038 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.653465 4856 manager.go:319] Starting recovery of all containers Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.658426 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a422b7e6a7fd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:02:38.640906193 +0000 UTC m=+1.054299551,LastTimestamp:2025-11-22 07:02:38.640906193 +0000 UTC m=+1.054299551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669558 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669679 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669695 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669706 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669719 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669733 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669783 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669801 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669816 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669829 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669841 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669856 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669869 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669883 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669896 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669908 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669921 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669936 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669948 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669957 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669968 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669980 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.669991 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670024 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670037 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670047 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670060 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670071 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670081 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670092 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670104 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670114 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670124 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670150 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670174 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670186 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670195 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670205 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670215 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670225 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670235 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670245 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670255 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670264 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670274 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670283 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670353 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670379 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670389 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670399 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670410 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670422 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670435 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670446 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670457 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670470 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670481 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670489 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670498 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670521 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670532 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670541 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670552 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670563 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670576 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670588 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670601 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670615 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670627 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670640 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670653 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670667 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670679 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670692 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670705 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670714 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670726 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670734 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670742 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670750 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670761 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670772 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670782 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670793 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670802 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670811 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670821 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670834 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670844 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670855 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670865 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670876 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670887 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670898 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670909 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670919 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670929 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670937 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670947 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670957 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670969 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670979 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.670988 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671003 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671017 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671028 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671039 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671050 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671061 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671072 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671083 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671093 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671105 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671117 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671128 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671138 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671148 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671157 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671167 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671178 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671188 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671198 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671208 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671218 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671229 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671240 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671250 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671259 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671269 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671279 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671289 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671299 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671309 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671319 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671329 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671340 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671349 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671358 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671367 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671376 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671385 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671394 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671402 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671410 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671420 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671429 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671439 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671448 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671457 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671467 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671476 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671486 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671496 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671520 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671530 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671556 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671568 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671583 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.671595 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675391 4856 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675433 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675447 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675458 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675470 4856 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675495 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675525 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675536 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675546 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675557 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675567 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675577 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675607 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675620 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675633 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675645 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675657 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675692 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675708 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675721 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675733 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675763 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675774 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675783 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675796 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675806 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675818 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675849 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675862 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675872 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675884 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675894 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675919 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675929 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675940 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675950 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675960 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675971 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.675999 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676011 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676041 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676053 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676582 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676597 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676648 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676660 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676696 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676752 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676767 4856 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676776 4856 reconstruct.go:97] "Volume reconstruction finished" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.676784 4856 reconciler.go:26] "Reconciler: start to sync state" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.681439 4856 manager.go:324] Recovery completed Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.695936 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.698031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.698073 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.698105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.699312 4856 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.699343 4856 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.699399 4856 state_mem.go:36] "Initialized new in-memory state store" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.705993 4856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.707803 4856 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.707928 4856 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.708089 4856 kubelet.go:2335] "Starting kubelet main sync loop" Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.708430 4856 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 22 07:02:38 crc kubenswrapper[4856]: W1122 07:02:38.709073 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.709134 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.745329 4856 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.746903 4856 policy_none.go:49] "None policy: Start" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.748181 4856 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.748223 4856 state_mem.go:35] "Initializing new in-memory state store" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.798943 4856 manager.go:334] "Starting Device Plugin manager" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.799180 4856 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.799206 4856 server.go:79] "Starting device plugin registration server" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.799714 4856 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.799737 4856 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.800072 4856 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.800186 4856 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.800195 4856 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.806358 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.809634 4856 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 07:02:38 crc kubenswrapper[4856]: 
I1122 07:02:38.809715 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.810777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.810806 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.810831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.811015 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.811222 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.811256 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.812399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.812423 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.812436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.812690 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.812855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.812929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.812951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.813559 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.813664 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.814950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.815032 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.815065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.815273 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.815500 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.815730 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.817540 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.817573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.817585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.817699 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.817966 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818028 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818357 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818678 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818813 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.818843 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.819236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.819373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.819390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.819419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.819479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.819488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.854212 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.879907 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880206 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880283 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880343 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880379 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880416 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880442 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880469 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880539 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880685 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.880756 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.900404 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.902609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.902645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.902657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.902682 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:02:38 crc kubenswrapper[4856]: E1122 07:02:38.903155 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982424 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982625 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982660 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982716 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982744 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982774 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982807 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982839 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982877 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982910 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.982975 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.983008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.983039 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.983071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.983822 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.983902 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984053 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984123 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984134 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984194 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984228 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984248 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984283 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984330 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984341 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984365 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 22 07:02:38 crc kubenswrapper[4856]: I1122 07:02:38.984411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.104309 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.106347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.106473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.106490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.106592 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:02:39 crc kubenswrapper[4856]: E1122 07:02:39.107532 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.152815 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.159765 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.178949 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.196824 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.205531 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:39 crc kubenswrapper[4856]: E1122 07:02:39.255780 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.257153 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9916b532607d854b28d060f387abf3eda46b85172cf76bf1f15239b06bffe1c8 WatchSource:0}: Error finding container 9916b532607d854b28d060f387abf3eda46b85172cf76bf1f15239b06bffe1c8: Status 404 returned error can't find the container with id 9916b532607d854b28d060f387abf3eda46b85172cf76bf1f15239b06bffe1c8 Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.260535 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6084f090e7b5b27b5f4d6d5f04ef4d9588bfe4a091872f2210b5c494d76f878e WatchSource:0}: Error finding container 6084f090e7b5b27b5f4d6d5f04ef4d9588bfe4a091872f2210b5c494d76f878e: Status 404 returned error can't find the container with id 6084f090e7b5b27b5f4d6d5f04ef4d9588bfe4a091872f2210b5c494d76f878e Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.262994 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3b70ba5949347e15da1021f495938f47057140e31450ea40b4f43eba1e385f59 WatchSource:0}: Error finding container 3b70ba5949347e15da1021f495938f47057140e31450ea40b4f43eba1e385f59: Status 404 returned error can't find the container with id 3b70ba5949347e15da1021f495938f47057140e31450ea40b4f43eba1e385f59 Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.265977 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e75b03c0fc5205a126bd8ad436be648e048d2ffda837f8137331354296b1673b WatchSource:0}: Error finding container e75b03c0fc5205a126bd8ad436be648e048d2ffda837f8137331354296b1673b: Status 404 returned error can't find the container with id e75b03c0fc5205a126bd8ad436be648e048d2ffda837f8137331354296b1673b Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.273276 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-c57d8b64adad8e8914a88e8522b544e4ed82a0107e2d6687106464ccd8dacc57 WatchSource:0}: Error finding container c57d8b64adad8e8914a88e8522b544e4ed82a0107e2d6687106464ccd8dacc57: Status 404 returned error can't find the container with id c57d8b64adad8e8914a88e8522b544e4ed82a0107e2d6687106464ccd8dacc57 Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.508046 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.510031 4856 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.510090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.510103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.510146 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:02:39 crc kubenswrapper[4856]: E1122 07:02:39.511046 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.592417 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:39 crc kubenswrapper[4856]: E1122 07:02:39.592624 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.641995 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.657037 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:39 crc kubenswrapper[4856]: E1122 07:02:39.657176 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.714648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c57d8b64adad8e8914a88e8522b544e4ed82a0107e2d6687106464ccd8dacc57"} Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.716532 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3b70ba5949347e15da1021f495938f47057140e31450ea40b4f43eba1e385f59"} Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.717776 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6084f090e7b5b27b5f4d6d5f04ef4d9588bfe4a091872f2210b5c494d76f878e"} Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.720858 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9916b532607d854b28d060f387abf3eda46b85172cf76bf1f15239b06bffe1c8"} Nov 22 07:02:39 crc kubenswrapper[4856]: I1122 07:02:39.722202 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e75b03c0fc5205a126bd8ad436be648e048d2ffda837f8137331354296b1673b"} Nov 22 07:02:39 crc kubenswrapper[4856]: W1122 07:02:39.870993 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:39 crc kubenswrapper[4856]: E1122 07:02:39.871076 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:40 crc kubenswrapper[4856]: E1122 07:02:40.056841 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Nov 22 07:02:40 crc kubenswrapper[4856]: W1122 07:02:40.143269 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:40 crc kubenswrapper[4856]: E1122 07:02:40.143469 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:40 crc kubenswrapper[4856]: I1122 07:02:40.312110 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:40 crc kubenswrapper[4856]: I1122 07:02:40.313216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:40 crc kubenswrapper[4856]: I1122 07:02:40.313259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:40 crc kubenswrapper[4856]: I1122 07:02:40.313269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:40 crc kubenswrapper[4856]: I1122 07:02:40.313294 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:02:40 crc kubenswrapper[4856]: E1122 07:02:40.313726 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Nov 22 07:02:40 crc kubenswrapper[4856]: I1122 07:02:40.642469 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.642329 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:41 crc kubenswrapper[4856]: E1122 07:02:41.658363 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.727813 4856 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c" exitCode=0 Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.727887 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c"} Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.727905 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.728951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.728984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.728996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.730290 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab" exitCode=0 Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.730356 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.730346 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab"} Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.731396 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.731422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.731434 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.732819 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8cd030d1abf19e74d644e8ad0666a2dfa0f72d5ffe43a4ba9771001eb0d6f4bb" exitCode=0 Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.732849 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8cd030d1abf19e74d644e8ad0666a2dfa0f72d5ffe43a4ba9771001eb0d6f4bb"} Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.733229 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.733469 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736161 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736174 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736840 4856 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302" exitCode=0 Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302"} Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.736965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.737060 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.738253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.738281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.738300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.740885 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41"} Nov 22 07:02:41 crc 
kubenswrapper[4856]: I1122 07:02:41.740910 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab"} Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.740925 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f"} Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.740958 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c"} Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.741049 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.742142 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.742181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.742194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.858320 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.913857 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.916339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.916375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.916384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:41 crc kubenswrapper[4856]: I1122 07:02:41.916409 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:02:41 crc kubenswrapper[4856]: E1122 07:02:41.916871 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Nov 22 07:02:42 crc kubenswrapper[4856]: W1122 07:02:42.187080 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:42 crc kubenswrapper[4856]: E1122 07:02:42.187246 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.642179 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:42 crc kubenswrapper[4856]: W1122 07:02:42.718367 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:42 crc kubenswrapper[4856]: E1122 07:02:42.718448 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.747416 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53cb781d7b38c394963fdb607cd112607d69b59e5df9949c45f4ce258f2bc5ad" exitCode=0 Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.747529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53cb781d7b38c394963fdb607cd112607d69b59e5df9949c45f4ce258f2bc5ad"} Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.747615 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.748860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.748900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.748913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.750988 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.751663 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772"} Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.752335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.752367 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.752380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.755051 
4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89"} Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.755097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a"} Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.757270 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.757822 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1"} Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.757869 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab"} Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.758159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.758190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:42 crc kubenswrapper[4856]: I1122 07:02:42.758204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:42 crc kubenswrapper[4856]: W1122 07:02:42.826140 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:42 crc kubenswrapper[4856]: E1122 07:02:42.826303 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:42 crc kubenswrapper[4856]: W1122 07:02:42.846237 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:42 crc kubenswrapper[4856]: E1122 07:02:42.846384 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.642290 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.762541 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00"} Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.764380 4856 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="547a78bff8684623b3cddeb8cb58585fe1d54a2289d8bbaa8d9cfa3838f79fbf" exitCode=0 Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.764420 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"547a78bff8684623b3cddeb8cb58585fe1d54a2289d8bbaa8d9cfa3838f79fbf"} Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.764544 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.765955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.766032 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.766102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.769264 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd"} Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.769366 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.769376 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.769575 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.770296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.770320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.770301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.770351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.770361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.770330 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 
07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.771408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.771448 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:43 crc kubenswrapper[4856]: I1122 07:02:43.771460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.450694 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.608838 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.641834 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.703808 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.773817 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3"} Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.776885 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"1d8e3caf8922115fcaad23d59bec521f943535bd3e4f1810dac8034fee7f0308"} Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.776956 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.776981 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.777054 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.777883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.777927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.777937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.778830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.778877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:44 crc kubenswrapper[4856]: I1122 07:02:44.778888 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:02:44 crc kubenswrapper[4856]: E1122 07:02:44.859417 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="6.4s" Nov 22 07:02:44 crc kubenswrapper[4856]: E1122 07:02:44.861031 4856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187a422b7e6a7fd1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-22 07:02:38.640906193 +0000 UTC m=+1.054299551,LastTimestamp:2025-11-22 07:02:38.640906193 +0000 UTC m=+1.054299551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.117784 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.119722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.119769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.119783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.119815 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:02:45 crc kubenswrapper[4856]: E1122 07:02:45.120433 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.642822 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.784385 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b23644990064092a130b76abb48236e12f42692339cc238886316ea01b43f841"} Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.784527 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.785762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.785945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.786042 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.792522 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0aad34774f3102a86239fa8a3496b6217e3d13d7840cba3b757004860438498f"} Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.792701 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.792914 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.794280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.794311 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.794321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.794487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.794624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:45 crc kubenswrapper[4856]: I1122 07:02:45.794732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.642532 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.798722 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.801379 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b23644990064092a130b76abb48236e12f42692339cc238886316ea01b43f841" exitCode=255 Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.801468 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b23644990064092a130b76abb48236e12f42692339cc238886316ea01b43f841"} Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.801635 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.802656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.802689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.802700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:46 crc 
kubenswrapper[4856]: I1122 07:02:46.803237 4856 scope.go:117] "RemoveContainer" containerID="b23644990064092a130b76abb48236e12f42692339cc238886316ea01b43f841" Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.806747 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e8e97b6c3ad632234a8454973f580571e68996acb640d3d6fba03fb18490f7a8"} Nov 22 07:02:46 crc kubenswrapper[4856]: I1122 07:02:46.806791 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"20fc6d6741dee9fc3006c3cd0e7060989603bad777998b6caf543a4362de6c6d"} Nov 22 07:02:47 crc kubenswrapper[4856]: W1122 07:02:47.449847 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Nov 22 07:02:47 crc kubenswrapper[4856]: E1122 07:02:47.449993 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.811584 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.813705 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.813706 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400"} Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.813797 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.814633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.814682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.814696 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.818404 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4e7a45773abeca6597b5bd187f5e8a6ecd158ae3b97a2cf0888a920109043dab"} Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.818491 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.819164 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:47 crc 
kubenswrapper[4856]: I1122 07:02:47.819200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:47 crc kubenswrapper[4856]: I1122 07:02:47.819213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.328394 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.379470 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 22 07:02:48 crc kubenswrapper[4856]: E1122 07:02:48.806546 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.821852 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.822063 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.822185 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.823208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.823299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.823318 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.823882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.823954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:48 crc kubenswrapper[4856]: I1122 07:02:48.823981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.211295 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.824598 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.824730 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.825716 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.825757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.825767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.825813 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.825839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:49 crc kubenswrapper[4856]: I1122 07:02:49.825849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.327570 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.827893 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.829573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.829634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.829646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.881840 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.882381 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.885294 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.885361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.885383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:50 crc kubenswrapper[4856]: I1122 07:02:50.887190 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.520780 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.523555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.523616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.523635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.523680 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.831226 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.831384 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 
07:02:51.832659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.832709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.832723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.832883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.832960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:51 crc kubenswrapper[4856]: I1122 07:02:51.832983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:53 crc kubenswrapper[4856]: I1122 07:02:53.882448 4856 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:02:53 crc kubenswrapper[4856]: I1122 07:02:53.882527 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:02:57 crc kubenswrapper[4856]: W1122 07:02:57.522658 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 22 07:02:57 crc kubenswrapper[4856]: I1122 07:02:57.523265 4856 trace.go:236] Trace[457287971]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 07:02:47.521) (total time: 10001ms): Nov 22 07:02:57 crc kubenswrapper[4856]: Trace[457287971]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:02:57.522) Nov 22 07:02:57 crc kubenswrapper[4856]: Trace[457287971]: [10.001968216s] [10.001968216s] END Nov 22 07:02:57 crc kubenswrapper[4856]: E1122 07:02:57.523408 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 22 07:02:57 crc kubenswrapper[4856]: I1122 07:02:57.643123 4856 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 22 07:02:58 crc kubenswrapper[4856]: W1122 07:02:58.198178 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed 
to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.198280 4856 trace.go:236] Trace[2102537752]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 07:02:48.196) (total time: 10001ms): Nov 22 07:02:58 crc kubenswrapper[4856]: Trace[2102537752]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:02:58.198) Nov 22 07:02:58 crc kubenswrapper[4856]: Trace[2102537752]: [10.001851053s] [10.001851053s] END Nov 22 07:02:58 crc kubenswrapper[4856]: E1122 07:02:58.198310 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.412294 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.412548 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.414502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.414583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.414595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.428408 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 22 07:02:58 crc kubenswrapper[4856]: W1122 07:02:58.585379 4856 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.585475 4856 trace.go:236] Trace[40113094]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (22-Nov-2025 07:02:48.583) (total time: 10002ms): Nov 22 07:02:58 crc kubenswrapper[4856]: Trace[40113094]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (07:02:58.585) Nov 22 07:02:58 crc kubenswrapper[4856]: Trace[40113094]: [10.002000426s] [10.002000426s] END Nov 22 07:02:58 crc kubenswrapper[4856]: E1122 07:02:58.585502 4856 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 22 07:02:58 crc kubenswrapper[4856]: E1122 07:02:58.806721 4856 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not 
found" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.850266 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.850872 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.853338 4856 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400" exitCode=255 Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.853538 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.853476 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400"} Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.853665 4856 scope.go:117] "RemoveContainer" containerID="b23644990064092a130b76abb48236e12f42692339cc238886316ea01b43f841" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.853994 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.854373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.854401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.854412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.855467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.855551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.855578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:02:58 crc kubenswrapper[4856]: I1122 07:02:58.856488 4856 scope.go:117] "RemoveContainer" containerID="b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400" Nov 22 07:02:58 crc kubenswrapper[4856]: E1122 07:02:58.856828 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 07:02:59 crc kubenswrapper[4856]: I1122 07:02:59.095561 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 07:02:59 crc kubenswrapper[4856]: I1122 07:02:59.095673 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 22 07:02:59 crc kubenswrapper[4856]: I1122 07:02:59.108565 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 22 07:02:59 crc kubenswrapper[4856]: I1122 07:02:59.108624 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 22 07:02:59 crc kubenswrapper[4856]: I1122 07:02:59.858896 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 07:03:00 crc kubenswrapper[4856]: I1122 07:03:00.333615 4856 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]log ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]etcd ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/generic-apiserver-start-informers ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/priority-and-fairness-filter ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-apiextensions-informers ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-apiextensions-controllers ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/crd-informer-synced ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-system-namespaces-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: 
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 22 07:03:00 crc kubenswrapper[4856]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/bootstrap-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/start-kube-aggregator-informers ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/apiservice-registration-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/apiservice-discovery-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]autoregister-completion ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/apiservice-openapi-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 22 07:03:00 crc kubenswrapper[4856]: livez check failed Nov 22 07:03:00 crc kubenswrapper[4856]: I1122 07:03:00.333689 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:03:03 crc kubenswrapper[4856]: I1122 07:03:03.882336 4856 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:03:03 crc kubenswrapper[4856]: I1122 07:03:03.882483 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:03:04 crc kubenswrapper[4856]: E1122 07:03:04.110222 4856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.114967 4856 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.120967 4856 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 22 07:03:04 crc 
kubenswrapper[4856]: E1122 07:03:04.121062 4856 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.715193 4856 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.955230 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.955601 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.957580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.957637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.957657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:04 crc kubenswrapper[4856]: I1122 07:03:04.958587 4856 scope.go:117] "RemoveContainer" containerID="b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400" Nov 22 07:03:04 crc kubenswrapper[4856]: E1122 07:03:04.958879 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.332651 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.448380 4856 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.878828 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.879784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.879826 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.879835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.880471 4856 scope.go:117] "RemoveContainer" containerID="b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400" Nov 22 07:03:05 crc kubenswrapper[4856]: E1122 07:03:05.880639 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 07:03:05 crc kubenswrapper[4856]: I1122 07:03:05.883524 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:03:06 crc kubenswrapper[4856]: I1122 07:03:06.880843 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:03:06 crc kubenswrapper[4856]: I1122 07:03:06.881722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:06 crc kubenswrapper[4856]: I1122 07:03:06.881760 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:06 crc kubenswrapper[4856]: I1122 07:03:06.881770 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:06 crc kubenswrapper[4856]: I1122 07:03:06.882371 4856 scope.go:117] "RemoveContainer" containerID="b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400" Nov 22 07:03:06 crc kubenswrapper[4856]: E1122 07:03:06.882560 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.506538 4856 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.633070 4856 apiserver.go:52] "Watching apiserver" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.639864 4856 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.640410 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-npjs2","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-44lw8","openshift-machine-config-operator/machine-config-daemon-klt85","openshift-multus/multus-fjqpv","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-2685z"] Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.640933 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.641231 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.641340 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.641365 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.641412 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.641487 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.641824 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.641953 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.642395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.642456 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.643982 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.644097 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.645554 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-44lw8" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.645849 4856 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.646433 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.650236 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.650563 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.651838 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.651928 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.652161 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.653363 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.653590 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.653973 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.657546 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.657580 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.657679 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.657742 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.657779 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.657831 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.657898 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658073 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658114 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658188 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658249 4856 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-network-operator"/"metrics-tls" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658258 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658350 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658347 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658427 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658477 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658733 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658861 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658953 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.659136 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.659278 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.659454 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.658957 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.670709 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.681283 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.693588 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.705333 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.715898 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.738762 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745570 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745702 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745895 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745918 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745936 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745955 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745973 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.745994 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746016 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746067 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746107 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 22 
07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746124 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746138 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746188 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746205 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746243 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.746292 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:03:08.246256875 +0000 UTC m=+30.659650133 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746404 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746438 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746462 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746493 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746543 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746556 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746568 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746625 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746650 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746672 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746690 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746711 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746727 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746765 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746786 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746803 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746826 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746881 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746903 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746921 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746940 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746962 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746984 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747002 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747021 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747043 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747064 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747081 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747130 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747148 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747169 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747188 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747238 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747259 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747282 
4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747302 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747320 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747363 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747425 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747464 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747493 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747542 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747564 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747588 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747612 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747635 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747653 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747670 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747692 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747713 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747731 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747751 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747772 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747791 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746877 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747810 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747820 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747861 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747883 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747906 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747944 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747962 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747981 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748001 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748023 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748040 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748056 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748073 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748091 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748112 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748128 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748145 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748164 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748184 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748201 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748220 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748241 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748298 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748322 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748348 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748370 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748390 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748409 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748428 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748449 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748467 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748486 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748503 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748541 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748559 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748637 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.750397 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.751849 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752817 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.753338 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.753309 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.754275 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.756434 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.756948 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758740 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.759769 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760467 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.761531 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.762020 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763202 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763264 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763300 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: 
I1122 07:03:07.763331 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763363 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763403 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763438 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763474 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763550 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.764427 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.764993 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.746988 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747137 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747098 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747806 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.747975 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748057 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748126 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748185 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748398 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748466 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748676 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748804 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.748948 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.749065 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.749209 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.749210 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.749497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.749619 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.751581 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752032 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752233 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752239 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752404 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752604 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752719 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.752819 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.753060 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.753194 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.753270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.753699 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.754620 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.755813 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.755833 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.756202 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.756597 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.756623 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.755689 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.756755 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.757173 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.757363 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.757345 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.757577 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.757602 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758018 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758035 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758042 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758217 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758466 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758451 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758576 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758593 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758634 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.758957 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.759278 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.759340 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.759500 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.765843 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760101 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760314 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760315 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767004 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767035 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767063 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767103 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767130 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767153 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767185 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767208 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767229 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" 
(UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767253 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767278 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767296 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767344 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767365 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767385 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767407 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767437 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.768663 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760382 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.769436 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.769577 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.769694 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.767455 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770336 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770373 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770403 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770434 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760705 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.760944 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.761435 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770462 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770641 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770694 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770712 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770763 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770815 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770698 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770858 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770905 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770948 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770993 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771025 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771067 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771097 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.770766 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.761584 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.761931 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.762010 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.762036 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.762733 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763016 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763130 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763215 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.763789 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771117 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771424 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771470 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771894 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771911 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.771993 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772078 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772190 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772239 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772280 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772317 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772880 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772916 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772953 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772987 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.772998 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: 
"kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773015 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773053 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773094 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773122 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773151 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773177 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773200 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773242 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773293 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773323 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773355 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773386 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773414 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773555 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-etc-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773596 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773635 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.773680 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774135 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-var-lib-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774195 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774287 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-netns\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774332 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-system-cni-dir\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774495 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-bin\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774559 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-hostroot\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774597 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-daemon-config\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774636 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774684 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774720 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-proxy-tls\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774741 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774750 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-env-overrides\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.774906 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.775275 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-ovn-kubernetes\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.775336 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.775607 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.775674 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.776113 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-socket-dir-parent\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.776579 4856 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.776838 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.776740 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-kubelet\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.777735 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.777805 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.777817 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7zdl\" (UniqueName: \"kubernetes.io/projected/59c3498a-6659-454c-9fe0-361fa7a0783c-kube-api-access-s7zdl\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.777887 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-systemd\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.777921 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-cni-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.777941 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-os-release\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.777970 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-rootfs\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778012 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778046 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778121 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778134 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.778177 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778277 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-os-release\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778744 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778804 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.778827 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.779030 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.779206 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:08.279168252 +0000 UTC m=+30.692561510 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.779472 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.779593 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.779839 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.780112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.780217 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.780262 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.780423 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.781597 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.781736 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.782732 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.782825 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.783315 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.783437 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.783530 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.783679 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-kubelet\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.783720 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-cnibin\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.783756 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/59c3498a-6659-454c-9fe0-361fa7a0783c-cni-binary-copy\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.783966 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-k8s-cni-cncf-io\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784039 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-config\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784083 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/752eee1c-98a9-4221-88a7-f332f704d4cf-ovn-node-metrics-cert\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784122 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-cni-bin\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784172 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.784352 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:07 crc 
kubenswrapper[4856]: I1122 07:03:07.784254 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5b51107-7e2b-463e-862c-700ac0976f31-hosts-file\") pod \"node-resolver-44lw8\" (UID: \"f5b51107-7e2b-463e-862c-700ac0976f31\") " pod="openshift-dns/node-resolver-44lw8" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784488 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-etc-kubernetes\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784544 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784599 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-netd\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784617 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgp8\" (UniqueName: \"kubernetes.io/projected/752eee1c-98a9-4221-88a7-f332f704d4cf-kube-api-access-wxgp8\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784637 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-conf-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.784673 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:08.284637646 +0000 UTC m=+30.698030904 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784724 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784949 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.784979 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-systemd-units\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785001 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-cni-multus\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785032 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbs6l\" (UniqueName: \"kubernetes.io/projected/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-kube-api-access-sbs6l\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785396 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785473 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sslkm\" (UniqueName: \"kubernetes.io/projected/f5b51107-7e2b-463e-862c-700ac0976f31-kube-api-access-sslkm\") 
pod \"node-resolver-44lw8\" (UID: \"f5b51107-7e2b-463e-862c-700ac0976f31\") " pod="openshift-dns/node-resolver-44lw8" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785699 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-slash\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785726 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785733 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-node-log\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785805 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785819 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785853 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cnibin\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785877 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk8gj\" (UniqueName: \"kubernetes.io/projected/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-kube-api-access-pk8gj\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785903 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-ovn\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785942 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-log-socket\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785964 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-mcd-auth-proxy-config\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.785984 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-multus-certs\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786029 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786053 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786088 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" 
(UniqueName: \"kubernetes.io/configmap/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786109 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-script-lib\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786165 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-system-cni-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786186 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-netns\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786266 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786473 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786492 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786524 4856 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786534 4856 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786544 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786737 4856 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.786757 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791222 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791267 4856 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.787298 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791286 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.787418 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.788777 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.788194 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.788958 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.790387 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791302 4856 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791441 4856 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.790612 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791492 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791551 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791575 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791595 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791609 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791625 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791637 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791650 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791663 4856 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791679 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791691 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791704 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791716 4856 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791731 4856 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791742 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791761 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791774 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791792 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791803 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791813 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791825 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791836 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791846 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791859 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791873 4856 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791884 4856 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791894 4856 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791916 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791930 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791941 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791951 4856 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791965 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.791977 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792004 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792016 4856 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792033 4856 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792044 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792054 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792065 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792079 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792090 4856 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792102 4856 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792111 4856 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792124 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792134 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792143 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792155 4856 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792167 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792177 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792192 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792205 4856 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792220 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792230 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792240 4856 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792252 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792264 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.792996 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793192 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793218 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793229 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793239 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793266 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793250 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793293 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793305 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793316 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793340 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793353 4856 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793362 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793372 4856 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793382 4856 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793394 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793403 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793413 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793440 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793450 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793461 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793472 4856 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793473 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793485 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793534 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793545 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793555 4856 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793568 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793582 4856 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793592 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793602 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793628 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793638 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793648 4856 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793695 4856 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793707 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793721 4856 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793735 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793751 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793764 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793794 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793806 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793822 4856 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793835 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793846 4856 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793861 4856 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793886 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793906 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.793984 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794010 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794033 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794050 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794069 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794083 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794096 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794109 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794126 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 22 
07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.794175 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.796355 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.797548 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.798040 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.798060 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.798076 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.798139 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:08.298119705 +0000 UTC m=+30.711513173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.799703 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.799873 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.800330 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.801164 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.801463 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.801533 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.801549 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:07 crc kubenswrapper[4856]: E1122 07:03:07.801606 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:08.301587673 +0000 UTC m=+30.714980931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.801618 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.801654 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.801826 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.801851 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.801997 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.802237 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.802478 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.804540 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.805149 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.808939 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.809334 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810061 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810153 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810491 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810648 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810719 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810805 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810830 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.810810 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.811293 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.811765 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.813287 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.813304 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.813239 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.813576 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.813672 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.815228 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.815271 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.816605 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.817027 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.817195 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.817197 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.819441 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.821086 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.822578 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.822714 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.825281 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.825981 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.826186 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.826419 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.826597 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.827782 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.827934 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.828000 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.828029 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.828128 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.828166 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.828180 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.828592 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.828713 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.829156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.829193 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.829526 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.830041 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.830218 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.831786 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.832145 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.832167 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.832173 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.832994 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.837010 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.837662 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.845278 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.848306 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.860425 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894618 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-proxy-tls\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894658 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-system-cni-dir\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-bin\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894691 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-hostroot\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894708 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-daemon-config\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-env-overrides\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894738 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-ovn-kubernetes\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894810 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-bin\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894859 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-system-cni-dir\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc 
kubenswrapper[4856]: I1122 07:03:07.894919 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-hostroot\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.894993 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-ovn-kubernetes\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895087 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-socket-dir-parent\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895111 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-cni-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895127 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-os-release\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895143 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-socket-dir-parent\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895145 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-kubelet\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895166 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-kubelet\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895211 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-cni-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895248 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7zdl\" (UniqueName: 
\"kubernetes.io/projected/59c3498a-6659-454c-9fe0-361fa7a0783c-kube-api-access-s7zdl\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895264 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-systemd\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895279 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-rootfs\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-systemd\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895469 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-daemon-config\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895522 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-rootfs\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895583 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-os-release\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895653 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-env-overrides\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895665 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/59c3498a-6659-454c-9fe0-361fa7a0783c-cni-binary-copy\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-k8s-cni-cncf-io\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " 
pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895747 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-os-release\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-kubelet\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895784 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-cnibin\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895802 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-k8s-cni-cncf-io\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895815 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5b51107-7e2b-463e-862c-700ac0976f31-hosts-file\") pod \"node-resolver-44lw8\" (UID: \"f5b51107-7e2b-463e-862c-700ac0976f31\") " pod="openshift-dns/node-resolver-44lw8" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895832 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-os-release\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895841 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-cnibin\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895834 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-config\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895854 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-kubelet\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895866 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/752eee1c-98a9-4221-88a7-f332f704d4cf-ovn-node-metrics-cert\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895889 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f5b51107-7e2b-463e-862c-700ac0976f31-hosts-file\") pod \"node-resolver-44lw8\" (UID: \"f5b51107-7e2b-463e-862c-700ac0976f31\") " pod="openshift-dns/node-resolver-44lw8" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-cni-bin\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxgp8\" (UniqueName: \"kubernetes.io/projected/752eee1c-98a9-4221-88a7-f332f704d4cf-kube-api-access-wxgp8\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895943 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-cni-bin\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895956 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-conf-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.895979 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-etc-kubernetes\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896002 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-multus-conf-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896018 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896039 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-etc-kubernetes\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " 
pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896051 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-netd\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896066 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896077 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896096 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-netd\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896101 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896128 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-systemd-units\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896153 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-cni-multus\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbs6l\" (UniqueName: \"kubernetes.io/projected/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-kube-api-access-sbs6l\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896200 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cnibin\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " 
pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896211 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-systemd-units\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896232 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-var-lib-cni-multus\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896264 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-config\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cnibin\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.896728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk8gj\" (UniqueName: \"kubernetes.io/projected/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-kube-api-access-pk8gj\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.897045 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.897100 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/59c3498a-6659-454c-9fe0-361fa7a0783c-cni-binary-copy\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.897185 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cni-binary-copy\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.897844 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sslkm\" (UniqueName: \"kubernetes.io/projected/f5b51107-7e2b-463e-862c-700ac0976f31-kube-api-access-sslkm\") pod \"node-resolver-44lw8\" (UID: \"f5b51107-7e2b-463e-862c-700ac0976f31\") " pod="openshift-dns/node-resolver-44lw8" Nov 22 
07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898248 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-slash\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898329 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-node-log\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898352 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-ovn\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898369 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-log-socket\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898422 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-slash\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898421 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-node-log\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898453 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-ovn\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-mcd-auth-proxy-config\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898487 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-log-socket\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898548 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" 
(UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-multus-certs\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898598 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898603 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-multus-certs\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898655 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-script-lib\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898685 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-system-cni-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-netns\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898729 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-etc-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898750 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898771 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898800 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-var-lib-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.898823 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-netns\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899206 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-etc-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899222 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899272 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-var-lib-openvswitch\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899313 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-system-cni-dir\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899278 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899665 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-netns\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899708 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/59c3498a-6659-454c-9fe0-361fa7a0783c-host-run-netns\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899879 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-script-lib\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.899942 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-mcd-auth-proxy-config\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.900277 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-proxy-tls\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904588 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904625 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904639 4856 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904656 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904670 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 
07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904682 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904694 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904707 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904719 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904731 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904746 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904757 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904802 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904821 4856 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904833 4856 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904844 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904858 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904871 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904883 4856 reconciler_common.go:293] 
"Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904918 4856 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904931 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904943 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904955 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904967 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904980 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.904994 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905006 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905017 4856 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905029 4856 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905040 4856 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905055 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905066 4856 reconciler_common.go:293] "Volume detached for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905077 4856 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905090 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905102 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905114 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905126 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905138 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905150 4856 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905205 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905218 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905230 4856 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905242 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905254 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905265 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905278 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905290 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905301 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905313 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905325 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905338 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905349 4856 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905360 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905376 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905388 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905400 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905412 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905424 4856 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905473 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905486 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905497 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905521 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905539 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905553 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905565 4856 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905577 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905588 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905602 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905613 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905677 4856 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905691 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.905729 4856 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.908624 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/752eee1c-98a9-4221-88a7-f332f704d4cf-ovn-node-metrics-cert\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.912918 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxgp8\" (UniqueName: \"kubernetes.io/projected/752eee1c-98a9-4221-88a7-f332f704d4cf-kube-api-access-wxgp8\") pod \"ovnkube-node-2685z\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.912943 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7zdl\" (UniqueName: \"kubernetes.io/projected/59c3498a-6659-454c-9fe0-361fa7a0783c-kube-api-access-s7zdl\") pod \"multus-fjqpv\" (UID: \"59c3498a-6659-454c-9fe0-361fa7a0783c\") " pod="openshift-multus/multus-fjqpv" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.913974 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk8gj\" (UniqueName: \"kubernetes.io/projected/5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f-kube-api-access-pk8gj\") pod \"multus-additional-cni-plugins-npjs2\" (UID: \"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\") " pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.918053 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbs6l\" (UniqueName: \"kubernetes.io/projected/0efefc3f-da5f-4035-81dc-6b5ab51e3df1-kube-api-access-sbs6l\") pod \"machine-config-daemon-klt85\" (UID: \"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\") " pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.918766 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sslkm\" (UniqueName: \"kubernetes.io/projected/f5b51107-7e2b-463e-862c-700ac0976f31-kube-api-access-sslkm\") pod \"node-resolver-44lw8\" (UID: \"f5b51107-7e2b-463e-862c-700ac0976f31\") " pod="openshift-dns/node-resolver-44lw8" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.963865 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.978961 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.985814 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-npjs2" Nov 22 07:03:07 crc kubenswrapper[4856]: I1122 07:03:07.995037 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.004293 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.013489 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fjqpv" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.023121 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.030818 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-44lw8" Nov 22 07:03:08 crc kubenswrapper[4856]: W1122 07:03:08.044999 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod752eee1c_98a9_4221_88a7_f332f704d4cf.slice/crio-2fb6064aa74579d426014990b59839b5244e3e70b91052ef254e2eab72f5f77a WatchSource:0}: Error finding container 2fb6064aa74579d426014990b59839b5244e3e70b91052ef254e2eab72f5f77a: Status 404 returned error can't find the container with id 2fb6064aa74579d426014990b59839b5244e3e70b91052ef254e2eab72f5f77a Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.310646 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.311157 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.311210 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.311241 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.311277 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:08 crc 
kubenswrapper[4856]: E1122 07:03:08.311793 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.311818 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.311830 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.311890 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:09.31186247 +0000 UTC m=+31.725255718 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312313 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:03:09.312304393 +0000 UTC m=+31.725697651 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312372 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312384 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312392 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312414 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:09.312407896 +0000 UTC m=+31.725801154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312454 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312487 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:09.312480278 +0000 UTC m=+31.725873536 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312651 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: E1122 07:03:08.312689 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:03:09.312681794 +0000 UTC m=+31.726075052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.682660 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-c9svb"] Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.683097 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.685525 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.685687 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.686216 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.689093 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.701124 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.712855 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.713208 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.714122 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.715049 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.715793 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.716416 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.717895 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.718552 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.719762 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.720522 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.721630 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.722251 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" 
path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.723499 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.724098 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.724643 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.726019 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.727559 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.728273 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.729247 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.730152 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.731024 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.733290 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.733923 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.734688 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.735736 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.736531 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.737625 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.738302 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.739546 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.740117 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.741188 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.741770 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.742293 4856 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.742421 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.744707 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.745271 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" 
path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.746250 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.747832 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.748596 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.749598 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.750275 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.750989 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.751404 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.752131 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.752771 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.754013 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.755269 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.756235 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.757324 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.758087 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.759781 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.760730 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.761449 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.762026 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.763195 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.763857 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.764833 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.764989 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.774749 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.786164 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.799040 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.812879 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.817750 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b296m\" (UniqueName: \"kubernetes.io/projected/2f19c49d-eee1-47ff-813d-51642778850a-kube-api-access-b296m\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.817816 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f19c49d-eee1-47ff-813d-51642778850a-host\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.817905 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f19c49d-eee1-47ff-813d-51642778850a-serviceca\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.830215 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.839315 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.848363 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.858768 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.870468 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.882107 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.886021 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"15aa5d97e1b017d1869ad2b0aa3eaaa15327508fa08f001998a637c11da9d0bf"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.888570 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-44lw8" event={"ID":"f5b51107-7e2b-463e-862c-700ac0976f31","Type":"ContainerStarted","Data":"219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.888651 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-44lw8" event={"ID":"f5b51107-7e2b-463e-862c-700ac0976f31","Type":"ContainerStarted","Data":"9d87e485134e29f2022340e565e0deb5d49c8d6e67d8af7aaf14d067bd198701"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.890406 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.890432 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a962d30cfa93a149af91a16acf508c0e9ceab307a82548ce6f7b1bea8484ad52"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.891908 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.891929 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.891938 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c8e1c2d500b5513c2a9863fdf9c85d3e9583c5766e4a48d3e8eb055ac6ddefec"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.892947 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec" exitCode=0 Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.892983 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.892997 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"2fb6064aa74579d426014990b59839b5244e3e70b91052ef254e2eab72f5f77a"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.895876 4856 generic.go:334] "Generic (PLEG): container finished" podID="5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f" containerID="321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa" exitCode=0 Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.895938 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerDied","Data":"321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.895999 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerStarted","Data":"da7299ab1454e9773bc7683a2915ee1e1341b5df07503656bafb376b8f42eb7c"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.899243 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerStarted","Data":"89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.899278 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerStarted","Data":"b35178c3a8d52cce9fedff6ba5032ea4428c3ba4e8efa0533b608a6e0ad5edd6"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.899603 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 
22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.902524 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.902595 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.902611 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"d3d702318caafc5b35851d14958358f7ccb06ebfeab34710adb13418ea8afedf"} Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.914909 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.919208 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f19c49d-eee1-47ff-813d-51642778850a-serviceca\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.919269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b296m\" (UniqueName: \"kubernetes.io/projected/2f19c49d-eee1-47ff-813d-51642778850a-kube-api-access-b296m\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.919301 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f19c49d-eee1-47ff-813d-51642778850a-host\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.919356 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2f19c49d-eee1-47ff-813d-51642778850a-host\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.920321 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2f19c49d-eee1-47ff-813d-51642778850a-serviceca\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.927085 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.940787 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.944834 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b296m\" (UniqueName: \"kubernetes.io/projected/2f19c49d-eee1-47ff-813d-51642778850a-kube-api-access-b296m\") pod \"node-ca-c9svb\" (UID: \"2f19c49d-eee1-47ff-813d-51642778850a\") " pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.952418 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.964526 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.984954 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin
\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.996847 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:08 crc kubenswrapper[4856]: I1122 07:03:08.999185 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-c9svb" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.005455 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.019401 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.034947 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.044345 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.054239 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.073456 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.088748 
4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:
07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.111250 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\"
:\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath
\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.127203 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.144358 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.162245 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.178485 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.193920 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 
07:03:09.324840 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.324965 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325010 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:03:11.324985427 +0000 UTC m=+33.738378695 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.325042 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325058 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.325078 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325101 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:11.32508856 +0000 UTC m=+33.738481828 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.325122 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325238 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325256 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325269 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325306 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325345 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325324 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325362 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325312 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:11.325299916 +0000 UTC m=+33.738693174 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325574 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:11.325553163 +0000 UTC m=+33.738946491 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.325612 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:11.325581654 +0000 UTC m=+33.738975012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.708676 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.709059 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.708801 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.709225 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.708801 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:09 crc kubenswrapper[4856]: E1122 07:03:09.709320 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.909723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.909784 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.909796 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.909807 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.909818 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.911008 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c9svb" event={"ID":"2f19c49d-eee1-47ff-813d-51642778850a","Type":"ContainerStarted","Data":"89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.911039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-c9svb" event={"ID":"2f19c49d-eee1-47ff-813d-51642778850a","Type":"ContainerStarted","Data":"c3dd903d6cb6a50e61ca9de74f6529fae6eeebbfcfb963e66a95962344070243"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.921315 4856 generic.go:334] "Generic (PLEG): container finished" podID="5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f" containerID="e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8" exitCode=0 Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.921378 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerDied","Data":"e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8"} Nov 22 07:03:09 crc kubenswrapper[4856]: I1122 07:03:09.944974 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:09.999695 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:09Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.033975 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.061757 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.079244 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.092197 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.107671 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.122585 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.134033 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.151569 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc 
kubenswrapper[4856]: I1122 07:03:10.169193 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.182646 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.205944 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.217943 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.233411 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.249927 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.263661 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.287739 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.304790 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.317654 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.331903 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.348128 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.363158 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.381908 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.888146 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.893713 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.903087 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.911540 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.933610 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c"} Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.935887 4856 generic.go:334] "Generic (PLEG): container finished" podID="5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f" containerID="1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f" exitCode=0 Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.935990 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerDied","Data":"1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f"} Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.942201 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.942430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd"} Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.956340 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.973141 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:10 crc kubenswrapper[4856]: I1122 07:03:10.990325 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:10Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.007559 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.022989 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.037891 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.054675 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.085859 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.103534 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.114256 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.122061 4856 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.125480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 
07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.125551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.125573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.125782 4856 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.129499 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.135109 4856 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.135620 4856 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.138143 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.138195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.138210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.138232 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.138244 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.142331 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.158917 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.162339 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.166816 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.166855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.166865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.166879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.166889 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.172022 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.178128 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.182695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.182764 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.182781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.182801 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.182812 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.186807 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.196032 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.200459 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.201259 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.201315 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.201328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.201350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.201365 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.216223 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.221746 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.221800 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.221812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.221831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.221846 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.227265 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.241677 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.241938 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.244459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.244530 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.244547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.244569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.244580 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.245994 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.263953 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.281701 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.298701 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.313429 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.330807 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.344400 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.344659 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.344703 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.344749 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:03:15.344699625 +0000 UTC m=+37.758092893 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.344890 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.344923 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.345019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345093 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:15.345050755 +0000 UTC m=+37.758444153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345096 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345136 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345153 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345236 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:15.34520919 +0000 UTC m=+37.758602438 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345271 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345291 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345305 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345349 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:15.345341743 +0000 UTC m=+37.758735001 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345273 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.345387 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:15.345380605 +0000 UTC m=+37.758773863 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.347622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.347682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.347701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.347735 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.347755 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.452031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.452113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.452133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.452161 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.452181 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.554864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.554948 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.554965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.555002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.555020 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.658285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.658327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.658336 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.658352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.658362 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.709172 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.709217 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.709214 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.709311 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.709385 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:11 crc kubenswrapper[4856]: E1122 07:03:11.709593 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.761096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.761172 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.761186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.761211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.761226 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.863501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.863583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.863598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.863622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.863637 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.951959 4856 generic.go:334] "Generic (PLEG): container finished" podID="5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f" containerID="c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93" exitCode=0 Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.952077 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerDied","Data":"c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.966052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.966165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.966181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.966296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.966319 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:11Z","lastTransitionTime":"2025-11-22T07:03:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:11 crc kubenswrapper[4856]: I1122 07:03:11.990636 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7
adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:11Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.013600 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.024773 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.036438 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.049305 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.063042 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.076428 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.076464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.076473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.076486 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.076495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.082901 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.099476 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-releas
e\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.115065 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7
fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.131753 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.146096 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.162852 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.178244 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.179165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.179238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.179250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.179274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.179289 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.283873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.283931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.283945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.283970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.283985 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.387678 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.387743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.387757 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.387783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.387798 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.492182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.492250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.492270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.492298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.492316 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.594782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.594832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.594844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.594862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.594873 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.697560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.697592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.697601 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.697614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.697622 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.799973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.800038 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.800050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.800065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.800075 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.905705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.905763 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.905776 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.905799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.905812 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:12Z","lastTransitionTime":"2025-11-22T07:03:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.961083 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.964977 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerStarted","Data":"0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f"} Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.984963 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:12 crc kubenswrapper[4856]: I1122 07:03:12.998527 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.008319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.008361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.008371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.008386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.008398 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.018003 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7
adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.030762 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.041066 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.054266 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.069548 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.083107 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.094670 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.110843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.110878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.110890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.110903 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.110912 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.112843 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:
03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.127580 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.139459 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.153811 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.213021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.213083 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.213093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.213109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.213118 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.315185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.315228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.315238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.315254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.315265 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.417609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.417676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.417690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.417709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.417721 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.520738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.520796 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.520805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.520820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.520835 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.623093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.623134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.623145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.623162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.623174 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.709103 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.709137 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:13 crc kubenswrapper[4856]: E1122 07:03:13.709260 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.709295 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:13 crc kubenswrapper[4856]: E1122 07:03:13.709473 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:13 crc kubenswrapper[4856]: E1122 07:03:13.709550 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.725465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.725502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.725539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.725555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.725565 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.827754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.827798 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.827808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.827823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.827835 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.930303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.930351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.930363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.930380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.930394 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:13Z","lastTransitionTime":"2025-11-22T07:03:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.970942 4856 generic.go:334] "Generic (PLEG): container finished" podID="5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f" containerID="0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f" exitCode=0 Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.970992 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerDied","Data":"0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f"} Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.984576 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:13 crc kubenswrapper[4856]: I1122 07:03:13.996360 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.008618 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.029155 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.032756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.032817 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.032829 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.032844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.032854 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.041321 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-
cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.057139 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.072216 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.086000 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.100091 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.111870 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.127630 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.135815 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.135860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.135871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.135889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.135900 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.143362 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.155798 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.239288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.239632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.239647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.239737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.239752 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.342950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.343001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.343009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.343022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.343046 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.445236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.445282 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.445298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.445317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.445330 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.547957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.547998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.548008 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.548023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.548037 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.650652 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.650676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.650685 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.650698 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.650707 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.752672 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.752779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.752804 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.752834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.752857 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.855578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.855612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.855622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.855635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.855644 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.962179 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.962892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.962907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.962923 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.962932 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:14Z","lastTransitionTime":"2025-11-22T07:03:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.981367 4856 generic.go:334] "Generic (PLEG): container finished" podID="5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f" containerID="1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847" exitCode=0 Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.981447 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerDied","Data":"1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.986361 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b"} Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.986944 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.987092 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.987172 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:14 crc kubenswrapper[4856]: I1122 07:03:14.996457 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:14Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.012734 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.022063 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.022569 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.023883 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.043628 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z 
is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.056601 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.065349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.065385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.065394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.065408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.065417 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.067453 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.079408 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.092054 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.104029 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.115986 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.130165 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.141552 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.153763 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.168367 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.168584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.168628 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.168638 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.168654 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.168663 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.180272 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.193884 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.208031 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.218990 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.237924 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef627
9260eecb0163dfe7cd72d67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.249859 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.261127 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.271119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.271158 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.271168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc 
kubenswrapper[4856]: I1122 07:03:15.271183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.271193 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.274168 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.288476 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.301660 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.314362 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.327280 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:15Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.373255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.373297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc 
kubenswrapper[4856]: I1122 07:03:15.373306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.373323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.373336 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.388706 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.388806 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.388830 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.388882 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:03:23.388862534 +0000 UTC m=+45.802255792 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.388911 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.388983 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:03:23.388971137 +0000 UTC m=+45.802364395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389052 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389096 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:23.38908806 +0000 UTC m=+45.802481318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.389299 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389430 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389447 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389458 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389489 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:23.389481651 +0000 UTC m=+45.802874909 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.389540 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389622 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389631 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389638 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.389670 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:23.389663286 +0000 UTC m=+45.803056534 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.475912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.475944 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.475953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.475966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.475977 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.581315 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.581370 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.581383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.581400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.581413 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.682951 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.682983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.682992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.683003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.683011 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.709492 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.709538 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.709615 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.709712 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.709798 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:15 crc kubenswrapper[4856]: E1122 07:03:15.709880 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.786409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.786759 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.786771 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.786787 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.787078 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.889262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.889293 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.889301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.889315 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.889324 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.991031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.991074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.991088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.991104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.991116 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:15Z","lastTransitionTime":"2025-11-22T07:03:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:15 crc kubenswrapper[4856]: I1122 07:03:15.993463 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" event={"ID":"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f","Type":"ContainerStarted","Data":"29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.007979 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.022248 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.037114 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.052442 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.065127 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.078480 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.089386 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.092501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.092546 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.092557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.092573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.092586 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.102917 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.117556 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.139433 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef627
9260eecb0163dfe7cd72d67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.153625 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.163682 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.178055 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:16Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.197280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.197335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.197346 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.197362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.197374 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.299531 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.299562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.299572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.299585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.299594 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.401806 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.401847 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.401856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.401871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.401880 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.504818 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.504852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.504863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.504881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.504892 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.607408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.607446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.607464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.607477 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.607487 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.709992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.710034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.710044 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.710054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.710063 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.812781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.812833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.812842 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.812861 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.812871 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.916124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.916180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.916195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.916220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:16 crc kubenswrapper[4856]: I1122 07:03:16.916237 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:16Z","lastTransitionTime":"2025-11-22T07:03:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.019543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.019612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.019637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.019662 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.019679 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.123464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.123559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.123583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.123614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.123635 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.226626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.226717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.226742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.226776 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.226785 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.329383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.329424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.329433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.329446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.329455 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.433004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.433065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.433082 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.433119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.433156 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.535348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.535385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.535394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.535409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.535419 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.638208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.638260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.638273 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.638295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.638307 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.709026 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.709044 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:17 crc kubenswrapper[4856]: E1122 07:03:17.709215 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.709058 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:17 crc kubenswrapper[4856]: E1122 07:03:17.709311 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:17 crc kubenswrapper[4856]: E1122 07:03:17.709391 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.741643 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.741697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.741707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.741726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.741736 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.845201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.845258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.845271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.845291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.845304 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.947860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.947924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.947941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.947983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:17 crc kubenswrapper[4856]: I1122 07:03:17.948001 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:17Z","lastTransitionTime":"2025-11-22T07:03:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.050237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.050278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.050287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.050303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.050317 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.153413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.153475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.153486 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.153502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.153542 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.256726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.256775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.256785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.256799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.256808 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.359499 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.359616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.359632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.359655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.359690 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.462262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.462296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.462303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.462316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.462324 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.564270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.564316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.564325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.564339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.564348 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.666091 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.666132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.666170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.666188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.666202 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.725949 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.747739 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.764132 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.768114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.768148 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.768163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.768180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.768189 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.785198 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.804080 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.821048 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.837061 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.848216 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.862201 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.870410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.870451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.870465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.870482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.870495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.874089 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.890299 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.903230 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.916104 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:18Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.972845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.972894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.972907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.972925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:18 crc kubenswrapper[4856]: I1122 07:03:18.972936 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:18Z","lastTransitionTime":"2025-11-22T07:03:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.008281 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/0.log" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.010886 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b" exitCode=1 Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.010913 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.011733 4856 scope.go:117] "RemoveContainer" containerID="497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.030017 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.051110 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.065867 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.074792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.074834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.074844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.074859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.074868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.102242 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:18Z\\\",\\\"message\\\":\\\" 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.398677 6219 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:03:18.398986 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.399350 6219 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:03:18.399394 6219 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 07:03:18.399403 6219 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 07:03:18.399439 6219 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:03:18.399488 6219 factory.go:656] Stopping watch factory\\\\nI1122 07:03:18.399540 6219 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:03:18.399547 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:03:18.399559 6219 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:03:18.399587 6219 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.120481 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.121534 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25"] Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.121944 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.124697 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.124729 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.135383 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.147543 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.158775 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.169736 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.177919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.177952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.177962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.177977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.177995 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.181826 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.198288 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.212034 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.225560 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.227933 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79ea67c8-6903-4252-a766-446631a43c49-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.227969 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/79ea67c8-6903-4252-a766-446631a43c49-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.228063 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/79ea67c8-6903-4252-a766-446631a43c49-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.228100 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b59tk\" (UniqueName: \"kubernetes.io/projected/79ea67c8-6903-4252-a766-446631a43c49-kube-api-access-b59tk\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.237030 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.249599 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.260805 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.277878 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef627
9260eecb0163dfe7cd72d67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:18Z\\\",\\\"message\\\":\\\" 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.398677 6219 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:03:18.398986 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.399350 6219 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:03:18.399394 6219 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 07:03:18.399403 6219 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 07:03:18.399439 6219 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:03:18.399488 6219 factory.go:656] Stopping watch factory\\\\nI1122 07:03:18.399540 6219 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:03:18.399547 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:03:18.399559 6219 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:03:18.399587 6219 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.279664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.279698 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.279749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.279777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.279791 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.294688 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.304585 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.318602 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.328748 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79ea67c8-6903-4252-a766-446631a43c49-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.328781 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/79ea67c8-6903-4252-a766-446631a43c49-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.328814 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/79ea67c8-6903-4252-a766-446631a43c49-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.328828 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b59tk\" (UniqueName: \"kubernetes.io/projected/79ea67c8-6903-4252-a766-446631a43c49-kube-api-access-b59tk\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.329461 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/79ea67c8-6903-4252-a766-446631a43c49-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 
07:03:19.329767 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/79ea67c8-6903-4252-a766-446631a43c49-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.332546 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.339067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/79ea67c8-6903-4252-a766-446631a43c49-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.343489 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.344880 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b59tk\" (UniqueName: \"kubernetes.io/projected/79ea67c8-6903-4252-a766-446631a43c49-kube-api-access-b59tk\") pod \"ovnkube-control-plane-749d76644c-6df25\" (UID: \"79ea67c8-6903-4252-a766-446631a43c49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.355080 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.366991 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.377671 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.381685 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.381723 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.381735 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.381753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.381765 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.387231 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.396308 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:19Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.435328 4856 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" Nov 22 07:03:19 crc kubenswrapper[4856]: W1122 07:03:19.445642 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ea67c8_6903_4252_a766_446631a43c49.slice/crio-61e7462a55e2669af45099c62a18971f444a7295c116e26fe3212a007a2f1a65 WatchSource:0}: Error finding container 61e7462a55e2669af45099c62a18971f444a7295c116e26fe3212a007a2f1a65: Status 404 returned error can't find the container with id 61e7462a55e2669af45099c62a18971f444a7295c116e26fe3212a007a2f1a65 Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.484527 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.484552 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.484560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.484572 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.484582 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.586694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.586737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.586749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.586767 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.586778 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.691629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.691686 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.691698 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.691718 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.691729 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.709799 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.709854 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.709936 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:19 crc kubenswrapper[4856]: E1122 07:03:19.710235 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:19 crc kubenswrapper[4856]: E1122 07:03:19.710478 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:19 crc kubenswrapper[4856]: E1122 07:03:19.710583 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.722605 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.722933 4856 scope.go:117] "RemoveContainer" containerID="b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.795694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.795761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.795777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.795844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.795864 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.899461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.899557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.899575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.899600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:19 crc kubenswrapper[4856]: I1122 07:03:19.899613 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:19Z","lastTransitionTime":"2025-11-22T07:03:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.002737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.002808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.002829 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.002860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.002880 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.016353 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" event={"ID":"79ea67c8-6903-4252-a766-446631a43c49","Type":"ContainerStarted","Data":"61e7462a55e2669af45099c62a18971f444a7295c116e26fe3212a007a2f1a65"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.105795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.105843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.105854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.105869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.105881 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.208942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.208995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.209007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.209027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.209040 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.255956 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-722tb"] Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.256665 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:20 crc kubenswrapper[4856]: E1122 07:03:20.256753 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.277348 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.291432 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.311840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.311913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.311925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.311954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.311967 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.321116 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:18Z\\\",\\\"message\\\":\\\" 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.398677 6219 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:03:18.398986 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.399350 6219 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:03:18.399394 6219 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 07:03:18.399403 6219 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 07:03:18.399439 6219 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:03:18.399488 6219 factory.go:656] Stopping watch factory\\\\nI1122 07:03:18.399540 6219 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:03:18.399547 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:03:18.399559 6219 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:03:18.399587 6219 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.337000 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.347152 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.347265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf49l\" (UniqueName: \"kubernetes.io/projected/dda6b6e5-61a2-459c-9207-5e5aa500869f-kube-api-access-hf49l\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.351006 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.368391 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.385774 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.404537 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.414677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.414722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.414731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.414749 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.414760 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.419617 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.436237 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.448196 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " 
pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.448245 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf49l\" (UniqueName: \"kubernetes.io/projected/dda6b6e5-61a2-459c-9207-5e5aa500869f-kube-api-access-hf49l\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:20 crc kubenswrapper[4856]: E1122 07:03:20.448407 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:20 crc kubenswrapper[4856]: E1122 07:03:20.448774 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:03:20.948492485 +0000 UTC m=+43.361885763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.449931 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.464285 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.464895 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf49l\" (UniqueName: \"kubernetes.io/projected/dda6b6e5-61a2-459c-9207-5e5aa500869f-kube-api-access-hf49l\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.480142 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.492186 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.506666 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.518344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc 
kubenswrapper[4856]: I1122 07:03:20.518403 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.518418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.518441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.518459 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.521193 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:20Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.621483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.621529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.621541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.621555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.621563 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.724316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.724360 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.724371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.724387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.724397 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.826788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.826822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.826831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.826845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.826853 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.928967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.928994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.929002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.929015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.929023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:20Z","lastTransitionTime":"2025-11-22T07:03:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:20 crc kubenswrapper[4856]: I1122 07:03:20.953260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:20 crc kubenswrapper[4856]: E1122 07:03:20.953388 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:20 crc kubenswrapper[4856]: E1122 07:03:20.953439 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:03:21.953425902 +0000 UTC m=+44.366819160 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.021709 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.023470 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.024667 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.026609 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/0.log" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.030248 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.030555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.030607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.030627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.030658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.030675 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.030722 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.032188 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" event={"ID":"79ea67c8-6903-4252-a766-446631a43c49","Type":"ContainerStarted","Data":"87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.039681 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.052081 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.062639 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.074890 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.089332 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.105112 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.119579 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.132475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.132553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.132574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.132597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.132610 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.141486 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:18Z\\\",\\\"message\\\":\\\" 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.398677 6219 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:03:18.398986 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.399350 6219 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:03:18.399394 6219 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 07:03:18.399403 6219 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 07:03:18.399439 6219 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:03:18.399488 6219 factory.go:656] Stopping watch factory\\\\nI1122 07:03:18.399540 6219 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:03:18.399547 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:03:18.399559 6219 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:03:18.399587 6219 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.158681 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPat
h\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.169768 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.180214 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.192009 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.207390 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.219238 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.231071 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.235050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.235086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.235096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.235109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.235119 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.245803 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.259965 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.276403 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a
5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:18Z\\\",\\\"message\\\":\\\" 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.398677 6219 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:03:18.398986 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.399350 6219 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:03:18.399394 6219 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 07:03:18.399403 6219 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 07:03:18.399439 6219 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:03:18.399488 6219 factory.go:656] Stopping watch factory\\\\nI1122 07:03:18.399540 6219 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:03:18.399547 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:03:18.399559 6219 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:03:18.399587 6219 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerS
tatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.295070 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.308410 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.322132 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.337459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.337505 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.337539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.337556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.337569 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.338566 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.354630 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.374665 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.396377 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 
2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.414613 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.430157 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.439916 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.439954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.439964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.439978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.439992 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.445979 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.460649 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.483576 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.508993 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPa
th\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.524976 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.542554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.542595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.542603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.542617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.542629 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.633595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.633645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.633655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.633671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.633682 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.645958 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.650473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.650531 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.650541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.650556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.650567 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.664684 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.668553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.668595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.668605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.668620 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.668631 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.678799 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.681731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.681766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.681775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.681791 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.681801 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.695616 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.698470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.698531 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.698548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.698567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.698578 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.708763 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.709175 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.709304 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.709590 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.710480 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.710638 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.712358 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.712469 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.718579 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.719968 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.720067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.720099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.720109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.720124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.720134 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.822245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.822290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.822304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.822320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.822332 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.923981 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.924029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.924040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.924057 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.924068 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:21Z","lastTransitionTime":"2025-11-22T07:03:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:21 crc kubenswrapper[4856]: I1122 07:03:21.961913 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.962092 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:21 crc kubenswrapper[4856]: E1122 07:03:21.962174 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:03:23.962155484 +0000 UTC m=+46.375548742 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.026295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.026340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.026356 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.026372 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.026382 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.036049 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" event={"ID":"79ea67c8-6903-4252-a766-446631a43c49","Type":"ContainerStarted","Data":"23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.037828 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/1.log" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.038360 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/0.log" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.041811 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669" exitCode=1 Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.041908 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.041974 4856 scope.go:117] "RemoveContainer" containerID="497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.042448 4856 scope.go:117] "RemoveContainer" containerID="ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669" Nov 22 07:03:22 crc kubenswrapper[4856]: E1122 07:03:22.042634 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.062038 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.073342 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.090303 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:18Z\\\",\\\"message\\\":\\\" 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.398677 6219 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:03:18.398986 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.399350 6219 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:03:18.399394 6219 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 07:03:18.399403 6219 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 07:03:18.399439 6219 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:03:18.399488 6219 factory.go:656] Stopping watch factory\\\\nI1122 07:03:18.399540 6219 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:03:18.399547 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:03:18.399559 6219 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:03:18.399587 6219 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerS
tatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.104156 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.114395 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.127854 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.129442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.129497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.129534 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.129555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.129567 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.145962 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.161751 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.174676 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.189977 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 
2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.205278 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.217766 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.229088 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.231810 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.231855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.231865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.231880 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.231890 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.241171 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.254813 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.271346 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.294383 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://497f00bec13eb06b4297801b9c8af3cd5a0ef6279260eecb0163dfe7cd72d67b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:18Z\\\",\\\"message\\\":\\\" 6219 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.398677 6219 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1122 07:03:18.398986 6219 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1122 07:03:18.399350 6219 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1122 07:03:18.399394 6219 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1122 07:03:18.399403 6219 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1122 07:03:18.399439 6219 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1122 07:03:18.399488 6219 factory.go:656] Stopping watch factory\\\\nI1122 07:03:18.399540 6219 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:03:18.399547 6219 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1122 07:03:18.399559 6219 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1122 07:03:18.399587 6219 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.306634 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.315812 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.326146 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.333856 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.333907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.333917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.333934 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.334259 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.336332 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.347225 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.358576 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.373829 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.382431 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.395629 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.408862 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.418414 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.429074 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.436789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.436821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.436830 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.436844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.436853 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.442182 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 
07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.455186 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.468057 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:22Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.539246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.539293 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.539304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.539321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.539334 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.642959 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.643261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.643274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.643291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.643303 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.746163 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.746213 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.746230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.746442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.746464 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.849015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.849055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.849067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.849082 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.849092 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.951292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.951325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.951335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.951347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:22 crc kubenswrapper[4856]: I1122 07:03:22.951360 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:22Z","lastTransitionTime":"2025-11-22T07:03:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.047167 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/1.log" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.050274 4856 scope.go:117] "RemoveContainer" containerID="ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669" Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.050436 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.053122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.053159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.053173 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.053188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.053200 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.064309 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.075965 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.090942 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.100896 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.118361 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.129728 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.140750 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.154989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.155036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.155051 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.155070 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.155084 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.160121 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.174199 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.186059 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.199940 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.216088 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.229564 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.239657 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.252857 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.256976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.257003 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc 
kubenswrapper[4856]: I1122 07:03:23.257011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.257025 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.257033 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.262321 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:23Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.359420 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.359500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.359557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.359581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.359598 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.462364 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.462401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.462411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.462424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.462434 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.477091 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.477190 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.477220 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.477240 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.477260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477292 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:03:39.477262275 +0000 UTC m=+61.890655533 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477351 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477361 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477377 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477392 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477412 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477424 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477472 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477492 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477393 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:39.477384049 +0000 UTC m=+61.890777307 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477735 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:39.477662626 +0000 UTC m=+61.891055914 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477789 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:39.477770949 +0000 UTC m=+61.891164237 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.477828 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:03:39.47781289 +0000 UTC m=+61.891206368 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.564395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.564454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.564463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.564476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.564484 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.667129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.667183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.667195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.667210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.667220 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.709029 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.709175 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.709255 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.709066 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.709277 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.709363 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.709481 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.709567 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.769695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.769731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.769739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.769752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.769760 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.871609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.871640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.871650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.871664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.871673 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.974553 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.974596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.974608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.974625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.974638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:23Z","lastTransitionTime":"2025-11-22T07:03:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:23 crc kubenswrapper[4856]: I1122 07:03:23.983256 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.983382 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:23 crc kubenswrapper[4856]: E1122 07:03:23.983434 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:03:27.983420046 +0000 UTC m=+50.396813304 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.077077 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.077106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.077114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.077128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.077136 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.183212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.183259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.183269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.183289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.183307 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.285292 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.285333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.285344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.285360 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.285371 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.388290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.388401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.388414 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.388432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.388445 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.490348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.490435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.490449 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.490462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.490471 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.592682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.592753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.592783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.592815 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.592837 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.696238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.696277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.696286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.696301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.696311 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.797879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.797908 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.797917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.797930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.797940 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.900165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.900229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.900248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.900265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:24 crc kubenswrapper[4856]: I1122 07:03:24.900280 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:24Z","lastTransitionTime":"2025-11-22T07:03:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.003311 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.003362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.003377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.003397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.003411 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.106170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.106246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.106273 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.106304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.106328 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.208989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.209018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.209026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.209039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.209047 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.311260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.311300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.311310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.311322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.311331 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.414805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.414879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.414902 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.414934 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.414959 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.517573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.517631 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.517654 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.517687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.517713 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.620289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.620327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.620335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.620353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.620361 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.709481 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.709593 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.709651 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:25 crc kubenswrapper[4856]: E1122 07:03:25.709709 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.709721 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:25 crc kubenswrapper[4856]: E1122 07:03:25.709831 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:25 crc kubenswrapper[4856]: E1122 07:03:25.709928 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:25 crc kubenswrapper[4856]: E1122 07:03:25.710058 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.727598 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.727685 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.727712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.727742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.727771 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.831090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.831155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.831176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.831204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.831225 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.933852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.933888 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.933897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.933910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:25 crc kubenswrapper[4856]: I1122 07:03:25.933918 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:25Z","lastTransitionTime":"2025-11-22T07:03:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.037454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.037839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.038150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.038350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.038591 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.140783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.140833 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.140845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.140861 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.140872 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.243054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.243098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.243110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.243128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.243139 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.345877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.346196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.346355 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.346498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.346644 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.448987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.449220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.449278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.449335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.449387 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.552180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.552219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.552230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.552246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.552257 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.655895 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.655977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.656010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.656040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.656061 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.758884 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.758940 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.758954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.758973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.758986 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.862135 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.862184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.862195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.862211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.862223 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.965080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.965172 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.965197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.965225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:26 crc kubenswrapper[4856]: I1122 07:03:26.965247 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:26Z","lastTransitionTime":"2025-11-22T07:03:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.067821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.068134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.068250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.068340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.068488 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.172320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.172365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.172383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.172397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.172407 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.274717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.274752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.274762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.274774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.274784 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.377369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.377412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.377422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.377436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.377447 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.479727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.479773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.479786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.479807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.479819 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.583424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.583560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.583582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.583606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.583623 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.689947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.689987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.689997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.690012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.690022 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.708777 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:27 crc kubenswrapper[4856]: E1122 07:03:27.708910 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.708973 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.709004 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:27 crc kubenswrapper[4856]: E1122 07:03:27.709173 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:27 crc kubenswrapper[4856]: E1122 07:03:27.709231 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.709359 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:27 crc kubenswrapper[4856]: E1122 07:03:27.709494 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.792728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.792778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.792790 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.792809 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.792822 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.895556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.895599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.895613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.895634 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.895648 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.998458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.998489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.998497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.998537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:27 crc kubenswrapper[4856]: I1122 07:03:27.998546 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:27Z","lastTransitionTime":"2025-11-22T07:03:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.026746 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:28 crc kubenswrapper[4856]: E1122 07:03:28.026960 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:28 crc kubenswrapper[4856]: E1122 07:03:28.027102 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:03:36.027065061 +0000 UTC m=+58.440458379 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.102686 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.103147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.103376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.103706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.103998 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.207179 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.207282 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.207331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.207359 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.207378 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.310488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.310563 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.310578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.310594 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.310606 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.413247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.413290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.413300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.413314 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.413326 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.515889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.515930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.515942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.515957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.515973 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.618212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.618250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.618261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.618275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.618284 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.720248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.720285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.720296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.720312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.720324 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.727408 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.739644 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.749525 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.762542 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.775878 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.787649 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.797977 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.816900 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29
d6d182d98d8d1284d02e0669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.822153 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.822182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.822190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.822204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.822213 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.830878 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.840128 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.849391 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.863728 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.878375 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.889071 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.901498 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.915284 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:28Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.924454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.924695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:28 crc 
kubenswrapper[4856]: I1122 07:03:28.924811 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.924913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:28 crc kubenswrapper[4856]: I1122 07:03:28.925005 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:28Z","lastTransitionTime":"2025-11-22T07:03:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.027263 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.027308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.027320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.027334 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.027343 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.150867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.150912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.150923 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.150940 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.150952 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.253119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.253159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.253169 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.253195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.253208 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.355331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.355371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.355395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.355409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.355420 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.457804 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.457862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.457873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.457887 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.457899 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.560014 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.560062 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.560075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.560093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.560106 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.662575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.662631 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.662648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.662668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.662683 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.709228 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.709271 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.709277 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.709292 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:29 crc kubenswrapper[4856]: E1122 07:03:29.709369 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:29 crc kubenswrapper[4856]: E1122 07:03:29.709438 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:29 crc kubenswrapper[4856]: E1122 07:03:29.709501 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:29 crc kubenswrapper[4856]: E1122 07:03:29.709562 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.765184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.765219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.765229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.765245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.765257 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.867632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.867670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.867681 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.867697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.867710 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.969921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.969974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.969984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.970004 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:29 crc kubenswrapper[4856]: I1122 07:03:29.970014 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:29Z","lastTransitionTime":"2025-11-22T07:03:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.072175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.072226 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.072238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.072252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.072262 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.174254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.174287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.174295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.174309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.174318 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.276467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.276555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.276580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.276608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.276628 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.379495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.379573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.379589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.379606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.379621 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.483048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.483089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.483100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.483115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.483126 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.586436 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.586479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.586489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.586503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.586525 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.688939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.688976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.688984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.688998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.689008 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.728784 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.742610 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.755523 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.770023 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.783495 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.791211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.791258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.791269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.791288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.791300 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.805750 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.817902 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.832650 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.842968 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.854193 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.865182 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.877635 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.888770 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.893140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.893194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.893208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.893225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.893236 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.902132 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.916586 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.927327 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.938331 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:30Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.995166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.995243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.995257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.995275 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:30 crc kubenswrapper[4856]: I1122 07:03:30.995286 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:30Z","lastTransitionTime":"2025-11-22T07:03:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.097977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.098027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.098037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.098052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.098060 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.200783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.200832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.200852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.200869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.200882 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.303814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.303861 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.303873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.303891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.303905 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.407166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.407215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.407227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.407244 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.407256 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.510997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.511123 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.511149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.511182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.511206 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.613693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.613755 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.613777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.613810 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.613833 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.709689 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.709757 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.709813 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.709807 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:31 crc kubenswrapper[4856]: E1122 07:03:31.709966 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:31 crc kubenswrapper[4856]: E1122 07:03:31.710163 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:31 crc kubenswrapper[4856]: E1122 07:03:31.710336 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:31 crc kubenswrapper[4856]: E1122 07:03:31.710450 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.716371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.716435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.716468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.716500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.716557 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.819029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.819070 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.819081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.819098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.819109 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.910735 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.918749 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.921829 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.921858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.921866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.921876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.921885 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.926816 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.927034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.927090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.927111 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.927141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.927164 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.938982 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: E1122 07:03:31.946583 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.950905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.950966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.950987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.951012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.951029 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.951749 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.968664 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: E1122 07:03:31.976954 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f
77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.981905 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\
\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.982235 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.982290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.982305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.982322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.982333 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:31 crc kubenswrapper[4856]: E1122 07:03:31.993803 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.997071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.997105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.997114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.997128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:31 crc kubenswrapper[4856]: I1122 07:03:31.997138 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:31Z","lastTransitionTime":"2025-11-22T07:03:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.001246 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29
d6d182d98d8d1284d02e0669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:31Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: E1122 07:03:32.009096 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.013245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.013372 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.013445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.013557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.013636 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.014257 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: E1122 07:03:32.024757 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: E1122 07:03:32.025049 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.026660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.026756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.026832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.026904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.026973 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.028639 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.038314 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.050219 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.061637 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.080026 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.093564 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.108773 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.125035 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.130426 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.130473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.130484 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.130500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.130546 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.138389 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:32Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.233453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 
crc kubenswrapper[4856]: I1122 07:03:32.233528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.233544 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.233565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.233578 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.336962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.337012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.337022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.337042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.337053 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.440268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.440319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.440332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.440458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.440472 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.543885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.543931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.543942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.543962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.543976 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.647011 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.647074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.647089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.647114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.647131 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.750097 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.750140 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.750152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.750168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.750179 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.853881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.853947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.853963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.853996 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.854019 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.958401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.958439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.958453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.958480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:32 crc kubenswrapper[4856]: I1122 07:03:32.958494 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:32Z","lastTransitionTime":"2025-11-22T07:03:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.069002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.069083 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.069098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.069126 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.069141 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.172122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.172149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.172157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.172170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.172178 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.280176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.280224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.280236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.280255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.280269 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.382535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.382583 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.382594 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.382608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.382617 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.485660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.485706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.485716 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.485737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.485751 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.588360 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.588395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.588404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.588418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.588428 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.691129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.691173 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.691185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.691202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.691215 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.709200 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.709221 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.709214 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.709264 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:33 crc kubenswrapper[4856]: E1122 07:03:33.709360 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:33 crc kubenswrapper[4856]: E1122 07:03:33.709464 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:33 crc kubenswrapper[4856]: E1122 07:03:33.709553 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:33 crc kubenswrapper[4856]: E1122 07:03:33.709636 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.793208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.793566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.793597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.793629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.793649 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.895704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.895762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.895777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.895799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.895814 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.998399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.998444 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.998455 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.998473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:33 crc kubenswrapper[4856]: I1122 07:03:33.998485 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:33Z","lastTransitionTime":"2025-11-22T07:03:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.101413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.101450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.101459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.101494 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.101541 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.203883 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.203924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.203938 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.203952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.203962 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.306942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.307020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.307037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.307059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.307074 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.410224 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.410260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.410274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.410296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.410307 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.512963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.513044 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.513091 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.513118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.513136 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.615386 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.615432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.615442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.615463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.615474 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.717674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.717728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.717745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.717765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.717781 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.820027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.820064 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.820074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.820090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.820099 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.922408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.922454 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.922462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.922475 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:34 crc kubenswrapper[4856]: I1122 07:03:34.922483 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:34Z","lastTransitionTime":"2025-11-22T07:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.026229 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.026260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.026268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.026281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.026290 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.128182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.128221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.128230 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.128244 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.128253 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.230863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.230904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.230914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.230929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.230938 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.333931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.333986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.334001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.334020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.334032 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.436974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.437031 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.437041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.437057 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.437068 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.539876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.539943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.539962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.539988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.540009 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.643240 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.643289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.643300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.643313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.643322 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.709221 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.709318 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.709329 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:35 crc kubenswrapper[4856]: E1122 07:03:35.709445 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.709493 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:35 crc kubenswrapper[4856]: E1122 07:03:35.709637 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:35 crc kubenswrapper[4856]: E1122 07:03:35.709718 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:35 crc kubenswrapper[4856]: E1122 07:03:35.709772 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.746061 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.746148 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.746162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.746180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.746191 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.848964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.849016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.849049 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.849063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.849073 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.951869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.951911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.951925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.951940 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:35 crc kubenswrapper[4856]: I1122 07:03:35.951948 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:35Z","lastTransitionTime":"2025-11-22T07:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.054007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.054069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.054086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.054111 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.054127 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.111871 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:36 crc kubenswrapper[4856]: E1122 07:03:36.112050 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:36 crc kubenswrapper[4856]: E1122 07:03:36.112127 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:03:52.112109807 +0000 UTC m=+74.525503065 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.157773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.157857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.157879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.157911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.157934 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.261166 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.261218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.261237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.261253 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.261264 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.364611 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.364677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.364692 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.364715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.364729 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.468034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.468102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.468125 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.468180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.468209 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.570807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.570860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.570873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.570897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.570910 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.673778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.673825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.673840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.673858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.673871 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.776670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.776754 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.776788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.776820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.776843 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.879432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.879471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.879481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.879493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.879530 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.982009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.982064 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.982082 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.982107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:36 crc kubenswrapper[4856]: I1122 07:03:36.982126 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:36Z","lastTransitionTime":"2025-11-22T07:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.084121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.084168 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.084183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.084204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.084218 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.186405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.186445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.186457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.186473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.186487 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.289312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.289343 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.289358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.289373 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.289384 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.391528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.391559 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.391569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.391581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.391589 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.494797 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.494839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.494850 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.494864 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.494874 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.597537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.597587 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.597600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.597619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.597631 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.700371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.700430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.700446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.700467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.700482 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.708829 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.708860 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:37 crc kubenswrapper[4856]: E1122 07:03:37.708958 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.709029 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.709073 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:37 crc kubenswrapper[4856]: E1122 07:03:37.709208 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:37 crc kubenswrapper[4856]: E1122 07:03:37.709303 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:37 crc kubenswrapper[4856]: E1122 07:03:37.709545 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.710449 4856 scope.go:117] "RemoveContainer" containerID="ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.803323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.803379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.803390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.803405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.803419 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.909737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.909786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.909798 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.909814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:37 crc kubenswrapper[4856]: I1122 07:03:37.909825 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:37Z","lastTransitionTime":"2025-11-22T07:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.012072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.012100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.012111 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.012127 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.012139 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.101716 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/1.log" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.104545 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.104952 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.114200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.114233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.114245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.114261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.114273 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.121003 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.135003 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.150454 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.179164 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db
6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 
obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.195441 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.213698 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.216335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.216384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.216404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc 
kubenswrapper[4856]: I1122 07:03:38.216427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.216444 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.231951 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.247239 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.291899 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.311424 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.319463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.319568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.319591 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.319617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.319639 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.333604 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.352978 4856 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.364291 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.378074 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.389377 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.402331 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:
19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.416851 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7
462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.422945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.422990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.423001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.423017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.423028 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.526139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.526269 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.526290 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.526315 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.526333 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.629034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.629092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.629110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.629133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.629151 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.726468 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.731957 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.732012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.732028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.732055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.732072 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.742242 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.755681 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.768068 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.783081 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 
2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.795464 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.813692 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.831415 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.834189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.834358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.834471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.834632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.834777 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.845277 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.859540 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.873063 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.884350 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.896472 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.908451 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.930238 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db
6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 
obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.943307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.943369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.943385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.943410 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.943427 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:38Z","lastTransitionTime":"2025-11-22T07:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.945710 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:38 crc kubenswrapper[4856]: I1122 07:03:38.960365 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:38Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.047562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.047892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.047904 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.047925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.047938 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.112299 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/2.log" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.113323 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/1.log" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.117884 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2" exitCode=1 Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.117939 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.118081 4856 scope.go:117] "RemoveContainer" containerID="ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.119083 4856 scope.go:117] "RemoveContainer" containerID="5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.119527 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.137280 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.153371 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.153447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.153458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.153477 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.153492 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.154981 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.171373 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.186148 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.206366 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db
6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ebf8bf31a17a9dc128d8cb87a7467053d4ceaa29d6d182d98d8d1284d02e0669\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:21Z\\\",\\\"message\\\":\\\"lane-749d76644c-6df25\\\\nF1122 07:03:21.863883 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:21Z is after 2025-08-24T17:21:41Z]\\\\nI1122 07:03:21.863896 6363 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25\\\\nI1122 07:03:21.863905 6363 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 in node crc\\\\nI1122 07:03:21.863911 6363 obj_retry.go:386] Retry successful for *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25 after 0 failed attempt(s)\\\\nI1122 07:03:21.863914 6363 obj_retry.go\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],
\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.221255 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.234725 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.252209 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.256608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.256658 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.256668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.256687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.256699 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.272397 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.291787 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.310696 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.343324 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 
2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.360278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.360328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.360342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.360365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.360381 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.365126 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.388267 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.407103 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.421351 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.437404 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:39Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.462548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.462590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.462599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.462613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.462621 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.550206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.550438 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550475 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:04:11.550435358 +0000 UTC m=+93.963828636 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.550580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550624 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.550638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550725 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:04:11.550694295 +0000 UTC m=+93.964087593 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.550762 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550849 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550868 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550884 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550918 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550985 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.550933 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:04:11.550922211 +0000 UTC m=+93.964315479 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.551032 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.551069 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.551081 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:04:11.551054946 +0000 UTC m=+93.964448214 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.551148 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-22 07:04:11.551129188 +0000 UTC m=+93.964522486 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.565979 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.566037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.566052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.566078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.566094 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.668858 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.668921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.668937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.668955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.668968 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.709534 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.709574 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.709672 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.709681 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.709804 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.709949 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.710070 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:39 crc kubenswrapper[4856]: E1122 07:03:39.710268 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.771216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.771274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.771285 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.771305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.771316 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.875096 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.875175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.875197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.875222 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.875241 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.977714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.977772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.977783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.977803 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:39 crc kubenswrapper[4856]: I1122 07:03:39.977815 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:39Z","lastTransitionTime":"2025-11-22T07:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.081268 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.081327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.081340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.081365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.081382 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.125997 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/2.log" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.132283 4856 scope.go:117] "RemoveContainer" containerID="5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2" Nov 22 07:03:40 crc kubenswrapper[4856]: E1122 07:03:40.132500 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.150291 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.167290 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.184966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.185571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.185635 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.185673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.185755 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.186589 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.206307 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.222011 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.234876 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.249164 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.261661 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.275906 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.289095 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.289617 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.289679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.289694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.289714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.289729 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.302620 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.314541 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.327567 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.336687 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.354035 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db
6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.367822 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.380361 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:40Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.392666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.392705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.392718 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.392742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.392755 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.496404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.496483 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.496496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.496543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.496560 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.600110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.600201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.600220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.600251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.600272 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.703473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.703575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.703599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.703637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.703667 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.807002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.807060 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.807073 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.807097 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.807110 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.910608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.910707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.910722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.910752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:40 crc kubenswrapper[4856]: I1122 07:03:40.910767 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:40Z","lastTransitionTime":"2025-11-22T07:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.014270 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.014339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.014356 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.014380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.014398 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.117673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.117740 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.117752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.117773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.117788 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.220481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.220566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.220581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.220601 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.220613 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.323398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.323444 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.323458 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.323736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.323762 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.427394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.427432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.427664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.427685 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.427749 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.530798 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.530859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.530900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.530920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.530933 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.634072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.634121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.634130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.634143 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.634154 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.709091 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.709193 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.709197 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.709234 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:41 crc kubenswrapper[4856]: E1122 07:03:41.709379 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:41 crc kubenswrapper[4856]: E1122 07:03:41.709482 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:41 crc kubenswrapper[4856]: E1122 07:03:41.709634 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:41 crc kubenswrapper[4856]: E1122 07:03:41.709791 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.737558 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.737608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.737633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.737691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.737707 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.841115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.841165 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.841178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.841193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.841204 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.943387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.943425 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.943434 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.943450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:41 crc kubenswrapper[4856]: I1122 07:03:41.943461 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:41Z","lastTransitionTime":"2025-11-22T07:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.046312 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.046349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.046359 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.046375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.046387 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.148916 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.148958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.148972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.148991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.149006 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.182627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.182666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.182676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.182691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.182700 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: E1122 07:03:42.195959 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:42Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.199965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.200006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.200019 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.200037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.200048 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: E1122 07:03:42.215893 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:42Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.219840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.219894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.219910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.219931 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.219947 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: E1122 07:03:42.235963 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:42Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.240112 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.240162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.240177 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.240199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.240211 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: E1122 07:03:42.256314 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:42Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.260493 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.260545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.260558 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.260577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.260590 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: E1122 07:03:42.273929 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:42Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:42 crc kubenswrapper[4856]: E1122 07:03:42.274049 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.275555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.275621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.275637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.275666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.275690 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.378460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.378501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.378527 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.378544 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.378554 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.481832 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.481880 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.481893 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.481916 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.481937 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.586035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.586079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.586091 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.586113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.586128 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.689106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.689151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.689162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.689180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.689196 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.792484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.792554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.792566 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.792586 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.792598 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.894354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.894392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.894400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.894414 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.894422 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.997280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.997342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.997362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.997541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:42 crc kubenswrapper[4856]: I1122 07:03:42.997591 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:42Z","lastTransitionTime":"2025-11-22T07:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.100492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.100545 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.100554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.100570 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.100579 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.203332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.203384 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.203398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.203419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.203434 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.305571 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.305615 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.305640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.305657 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.305668 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.408429 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.408529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.408541 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.408558 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.408568 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.511703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.511741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.511750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.511766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.511779 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.614424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.614969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.615069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.615187 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.615429 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.709235 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.709771 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.709840 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:43 crc kubenswrapper[4856]: E1122 07:03:43.710010 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.710299 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:43 crc kubenswrapper[4856]: E1122 07:03:43.710296 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:43 crc kubenswrapper[4856]: E1122 07:03:43.710525 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:43 crc kubenswrapper[4856]: E1122 07:03:43.710600 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.719261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.719387 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.719468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.719593 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.719678 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.824160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.824212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.824225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.824248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.824265 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.927502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.927601 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.927623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.927648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:43 crc kubenswrapper[4856]: I1122 07:03:43.927674 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:43Z","lastTransitionTime":"2025-11-22T07:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.031028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.031297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.031397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.031530 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.031605 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.135073 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.135144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.135161 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.135186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.135201 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.237989 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.238063 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.238083 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.238113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.238138 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.342033 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.342078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.342088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.342108 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.342120 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.445464 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.445498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.445529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.445543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.445554 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.547827 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.547860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.547869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.547900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.547910 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.650707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.650764 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.650776 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.650794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.650806 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.753478 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.753737 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.753881 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.753974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.754052 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.856619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.856665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.856674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.856691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.856700 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.960042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.960608 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.960684 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.960780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:44 crc kubenswrapper[4856]: I1122 07:03:44.960853 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:44Z","lastTransitionTime":"2025-11-22T07:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.063577 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.063619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.063630 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.063646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.063658 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.166035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.166102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.166122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.166147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.166164 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.269242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.269295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.269313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.269331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.269341 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.371709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.371949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.372009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.372069 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.372132 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.474088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.474138 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.474147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.474161 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.474169 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.577118 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.577162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.577174 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.577190 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.577201 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.680138 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.680198 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.680209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.680226 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.680236 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.708727 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:45 crc kubenswrapper[4856]: E1122 07:03:45.708884 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.708961 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.708970 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.708967 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:45 crc kubenswrapper[4856]: E1122 07:03:45.709053 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:45 crc kubenswrapper[4856]: E1122 07:03:45.709132 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:45 crc kubenswrapper[4856]: E1122 07:03:45.709221 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.783542 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.783590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.783603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.783622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.783638 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.886579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.886616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.886629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.886646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.886658 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.989146 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.989178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.989186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.989199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:45 crc kubenswrapper[4856]: I1122 07:03:45.989209 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:45Z","lastTransitionTime":"2025-11-22T07:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.091671 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.091711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.091722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.091738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.091749 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.194243 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.194340 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.194356 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.194378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.194394 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.296738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.296776 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.296788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.296803 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.296816 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.400630 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.400686 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.400704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.400756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.400769 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.503653 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.503694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.503704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.503719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.503728 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.606993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.607027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.607036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.607048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.607058 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.711536 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.711574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.711585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.711602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.711613 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.720411 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.814415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.814447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.814457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.814473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.814486 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.916966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.917007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.917022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.917040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:46 crc kubenswrapper[4856]: I1122 07:03:46.917053 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:46Z","lastTransitionTime":"2025-11-22T07:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.019016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.019054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.019068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.019086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.019098 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.121820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.121882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.121902 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.121924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.121939 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.224876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.224925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.224937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.224955 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.224969 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.327540 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.327581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.327592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.327609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.327620 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.429821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.429886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.429896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.429913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.429950 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.532239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.532277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.532287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.532299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.532308 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.634770 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.634813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.634822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.634837 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.634847 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.709670 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.709676 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.709690 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.709735 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:47 crc kubenswrapper[4856]: E1122 07:03:47.710320 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:47 crc kubenswrapper[4856]: E1122 07:03:47.710394 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:47 crc kubenswrapper[4856]: E1122 07:03:47.710561 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:47 crc kubenswrapper[4856]: E1122 07:03:47.710603 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.737361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.737393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.737401 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.737414 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.737423 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.839823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.840080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.840223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.840335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.840408 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.943716 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.944600 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.944744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.944913 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:47 crc kubenswrapper[4856]: I1122 07:03:47.945093 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:47Z","lastTransitionTime":"2025-11-22T07:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.048207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.048277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.048289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.048326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.048342 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.151409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.151440 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.151468 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.151484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.151495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.254659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.254704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.254712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.254726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.254739 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.357055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.357259 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.357365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.357500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.357598 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.468182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.468246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.468262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.468284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.468298 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.571326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.571390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.571404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.571433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.571451 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.674179 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.674214 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.674223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.674236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.674247 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.721465 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.735386 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.747315 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.759498 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.774727 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.776645 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.776705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc 
kubenswrapper[4856]: I1122 07:03:48.776715 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.776730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.776739 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.785059 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.796041 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.806048 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.813861 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.823400 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.835191 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.847160 4856 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.857800 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.867895 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.877864 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.880035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.880084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.880101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.880123 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.880139 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.892865 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.903295 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.912085 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:48Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.982560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.982616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.982629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:48 crc 
kubenswrapper[4856]: I1122 07:03:48.982648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:48 crc kubenswrapper[4856]: I1122 07:03:48.982659 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:48Z","lastTransitionTime":"2025-11-22T07:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.085184 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.085225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.085237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.085255 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.085267 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.188479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.188539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.188554 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.188573 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.188597 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.291303 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.291346 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.291354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.291369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.291378 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.393147 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.393185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.393193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.393206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.393215 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.496062 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.496095 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.496106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.496121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.496134 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.598234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.598271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.598282 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.598296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.598307 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.699914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.699953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.699963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.699976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.699986 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.709457 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.709502 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.709470 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.709468 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:49 crc kubenswrapper[4856]: E1122 07:03:49.709595 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:49 crc kubenswrapper[4856]: E1122 07:03:49.709698 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:49 crc kubenswrapper[4856]: E1122 07:03:49.709807 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:49 crc kubenswrapper[4856]: E1122 07:03:49.709886 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.802039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.802074 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.802086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.802100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.802109 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.904795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.904902 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.904918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.904939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:49 crc kubenswrapper[4856]: I1122 07:03:49.904953 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:49Z","lastTransitionTime":"2025-11-22T07:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.007682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.007719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.007728 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.007741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.007750 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.109945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.110006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.110018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.110040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.110052 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.212331 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.212380 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.212389 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.212402 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.212412 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.314333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.314381 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.314392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.314408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.314419 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.417238 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.417287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.417300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.417317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.417330 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.520413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.520485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.520536 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.520569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.520587 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.623320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.623389 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.623407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.623430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.623448 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.726949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.727012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.727028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.727080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.727098 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.829129 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.829178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.829189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.829203 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.829214 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.931581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.931644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.931660 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.931680 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:50 crc kubenswrapper[4856]: I1122 07:03:50.931698 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:50Z","lastTransitionTime":"2025-11-22T07:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.033982 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.034027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.034043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.034066 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.034076 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.136716 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.136764 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.136772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.136786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.136795 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.238828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.238871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.238879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.238893 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.238902 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.341917 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.341961 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.341969 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.341982 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.341991 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.444700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.444744 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.444756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.444772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.444788 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.547404 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.547478 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.547495 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.547551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.547570 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.650281 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.650326 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.650337 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.650352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.650362 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.708966 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:51 crc kubenswrapper[4856]: E1122 07:03:51.709475 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.709699 4856 scope.go:117] "RemoveContainer" containerID="5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2" Nov 22 07:03:51 crc kubenswrapper[4856]: E1122 07:03:51.709839 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.708993 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:51 crc kubenswrapper[4856]: E1122 07:03:51.709908 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.708980 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:51 crc kubenswrapper[4856]: E1122 07:03:51.709949 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.709018 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:51 crc kubenswrapper[4856]: E1122 07:03:51.710002 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.753065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.753115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.753126 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.753221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.753232 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.855979 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.856310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.856372 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.856432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.856498 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.958918 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.958958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.958970 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.958988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:51 crc kubenswrapper[4856]: I1122 07:03:51.958999 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:51Z","lastTransitionTime":"2025-11-22T07:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.062175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.062220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.062232 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.062248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.062260 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.164606 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.164639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.164649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.164664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.164673 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.183175 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.183335 4856 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.183605 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:04:24.183589094 +0000 UTC m=+106.596982352 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.266835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.266862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.266870 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.266882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.266891 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.368707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.368738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.368750 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.368766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.368775 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.467906 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.467965 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.467992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.468010 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.468020 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.479324 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:52Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.482188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.482219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.482228 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.482242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.482251 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.496627 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:52Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.499721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.499748 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.499756 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.499770 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.499779 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.511808 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:52Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.515039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.515070 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.515079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.515090 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.515100 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.527424 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:52Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.531374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.531407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.531419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.531434 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.531448 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.543166 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:52Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:52 crc kubenswrapper[4856]: E1122 07:03:52.543319 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.544610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
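
Editorial note: the "Error updating node status, will retry" entries above end in "update node status exceeds retry count", and the cause they report is the node.network-node-identity.openshift.io webhook serving an expired certificate (current time 2025-11-22T07:03:52Z is after 2025-08-24T17:21:41Z). Below is a minimal Go sketch for confirming that validity window against the webhook address quoted in the log (https://127.0.0.1:9743), assuming it is run on the CRC node itself; the program and its output format are hypothetical and are not part of the kubelet.

```go
// checkwebhookcert is a hypothetical standalone check, not kubelet code.
// It dials the webhook endpoint quoted in the log and prints the serving
// certificate's validity window. Verification is skipped because the
// certificate is already known to be expired.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate has expired, matching the x509 error in the log")
	}
}
```
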
event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.544640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.544649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.544666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.544678 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.647223 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.647265 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.647276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.647296 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.647308 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.748963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.749009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.749026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.749047 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.749063 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.852098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.852152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.852170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.852195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.852212 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.954249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.954291 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.954314 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.954332 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:52 crc kubenswrapper[4856]: I1122 07:03:52.954344 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:52Z","lastTransitionTime":"2025-11-22T07:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.057257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.057301 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.057309 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.057322 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.057331 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.159844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.160708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.160841 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.160867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.160884 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.263363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.263418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.263438 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.263465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.263486 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.366176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.366233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.366250 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.366271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.366286 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.468868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.468898 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.468905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.468920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.468928 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.571626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.571655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.571663 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.571677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.571688 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.674395 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.674452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.674471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.674496 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.674540 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.708817 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.708946 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.708977 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.709047 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:53 crc kubenswrapper[4856]: E1122 07:03:53.709051 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:53 crc kubenswrapper[4856]: E1122 07:03:53.709192 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:53 crc kubenswrapper[4856]: E1122 07:03:53.710153 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:53 crc kubenswrapper[4856]: E1122 07:03:53.710226 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.778170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.778251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.778284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.778313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.778331 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.881060 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.881155 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.881178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.881210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.881235 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.984702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.984769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.984794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.984822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:53 crc kubenswrapper[4856]: I1122 07:03:53.984843 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:53Z","lastTransitionTime":"2025-11-22T07:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.087667 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.087736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.087761 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.087788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.087809 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.192674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.192731 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.192752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.192776 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.192794 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.296054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.296093 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.296107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.296131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.296145 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.398867 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.398905 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.398914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.398929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.398939 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.501579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.501612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.501622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.501639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.501650 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.604053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.604325 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.604394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.604465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.604580 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.707047 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.707639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.707991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.708484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.708796 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.812144 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.812199 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.812212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.812233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.812247 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.914781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.914822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.914835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.914852 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:54 crc kubenswrapper[4856]: I1122 07:03:54.914892 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:54Z","lastTransitionTime":"2025-11-22T07:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.017921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.017978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.017997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.018022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.018040 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.120072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.120107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.120117 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.120139 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.120149 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.222677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.222745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.222762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.222783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.222799 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.325675 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.325714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.325725 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.325739 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.325767 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.428422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.428463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.428472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.428486 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.428495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.531618 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.531682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.531705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.531732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.531754 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.633928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.633968 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.633978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.633993 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.634002 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.708879 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.708916 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.708898 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:55 crc kubenswrapper[4856]: E1122 07:03:55.709030 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:55 crc kubenswrapper[4856]: E1122 07:03:55.709166 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:55 crc kubenswrapper[4856]: E1122 07:03:55.709234 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.709339 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:55 crc kubenswrapper[4856]: E1122 07:03:55.709483 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.736936 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.736972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.736984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.737000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.737011 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.838838 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.839141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.839252 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.839349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.839443 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.942407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.942447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.942456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.942470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:55 crc kubenswrapper[4856]: I1122 07:03:55.942480 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:55Z","lastTransitionTime":"2025-11-22T07:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.045342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.045399 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.045417 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.045441 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.045457 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.148128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.148193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.148212 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.148237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.148256 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.180006 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/0.log" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.180068 4856 generic.go:334] "Generic (PLEG): container finished" podID="59c3498a-6659-454c-9fe0-361fa7a0783c" containerID="89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4" exitCode=1 Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.180101 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerDied","Data":"89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.180585 4856 scope.go:117] "RemoveContainer" containerID="89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.201780 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.227165 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.244867 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.251588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.251640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.251651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.251670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.251684 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.257747 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.278023 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.292714 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.306171 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.320905 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.332199 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.343850 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.353381 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.354213 4856 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.354271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.354288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.354310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.354329 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.365627 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T0
7:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.376392 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.389872 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.401688 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.428305 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db
6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.444628 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:55Z\\\",\\\"message\\\":\\\"2025-11-22T07:03:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7\\\\n2025-11-22T07:03:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7 to /host/opt/cni/bin/\\\\n2025-11-22T07:03:10Z [verbose] multus-daemon started\\\\n2025-11-22T07:03:10Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:03:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.456939 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.456987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.456999 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.457019 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.457031 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.458717 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:56Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.558920 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.558959 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.558971 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.558987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.558999 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.662006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.662070 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.662085 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.662102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.662114 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.764227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.764271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.764304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.764324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.764336 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.867075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.867133 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.867151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.867173 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.867191 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.973804 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.973848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.973860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.973882 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:56 crc kubenswrapper[4856]: I1122 07:03:56.973901 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:56Z","lastTransitionTime":"2025-11-22T07:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.075966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.076037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.076055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.076089 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.076107 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.178943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.179005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.179027 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.179052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.179068 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.185129 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/0.log" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.185184 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerStarted","Data":"85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.200471 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.212330 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.221966 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.233649 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 
07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.246816 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.260230 4856 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.274413 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.281452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.281565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.281638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.281712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.281776 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.287009 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.298307 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.314830 4856 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.329068 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:55Z\\\",\\\"message\\\":\\\"2025-11-22T07:03:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7\\\\n2025-11-22T07:03:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7 to 
/host/opt/cni/bin/\\\\n2025-11-22T07:03:10Z [verbose] multus-daemon started\\\\n2025-11-22T07:03:10Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:03:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.341286 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.351415 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.366223 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.380366 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.385162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.385240 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.385258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.385289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.385306 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.393827 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.406065 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.425600 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:57Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.487828 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.488101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc 
kubenswrapper[4856]: I1122 07:03:57.488344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.488564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.488801 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.591385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.591433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.591445 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.591463 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.591475 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.693712 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.693768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.693785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.693806 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.693823 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.709384 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.709403 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:03:57 crc kubenswrapper[4856]: E1122 07:03:57.709551 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.709376 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.709394 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:03:57 crc kubenswrapper[4856]: E1122 07:03:57.709872 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:03:57 crc kubenswrapper[4856]: E1122 07:03:57.709782 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:03:57 crc kubenswrapper[4856]: E1122 07:03:57.709720 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.796625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.796679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.796721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.796745 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.796762 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.899249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.899308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.899323 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.899345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:57 crc kubenswrapper[4856]: I1122 07:03:57.899365 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:57Z","lastTransitionTime":"2025-11-22T07:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.002023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.002086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.002097 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.002116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.002131 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.105108 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.105201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.105218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.105245 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.105261 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.208694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.208747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.208762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.208785 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.208800 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.311780 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.311823 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.311838 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.311855 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.311868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.414714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.414786 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.414805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.414831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.414848 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.517590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.517669 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.517701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.517729 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.517752 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.622050 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.622099 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.622113 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.622137 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.622149 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.723942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.724041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.724053 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.724067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.724077 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.725938 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.748238 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.771756 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.790999 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.811439 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.823212 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.827880 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.827927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.827937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.827958 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.827973 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.838761 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.852461 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:55Z\\\",\\\"message\\\":\\\"2025-11-22T07:03:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7\\\\n2025-11-22T07:03:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7 to /host/opt/cni/bin/\\\\n2025-11-22T07:03:10Z [verbose] multus-daemon started\\\\n2025-11-22T07:03:10Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:03:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.862096 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.873974 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.883633 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.906087 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db
6bf903519c625a4b57345ca2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.918387 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.930367 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.930413 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.930421 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.930439 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.930450 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:58Z","lastTransitionTime":"2025-11-22T07:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.934692 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648c
a7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.946595 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.964596 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.982649 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:58 crc kubenswrapper[4856]: I1122 07:03:58.996185 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:03:58Z is after 2025-08-24T17:21:41Z" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.033709 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.033787 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.033808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.033840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.033858 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:59Z","lastTransitionTime":"2025-11-22T07:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.137357 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.137398 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.137408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.137425 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.137435 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:59Z","lastTransitionTime":"2025-11-22T07:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.240565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.240615 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.240624 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.240639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.240650 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:59Z","lastTransitionTime":"2025-11-22T07:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.347589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.347640 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.347650 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.347665 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.347677 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:59Z","lastTransitionTime":"2025-11-22T07:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.450298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.450330 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.450339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.450354 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.450364 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:59Z","lastTransitionTime":"2025-11-22T07:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.553547 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.553585 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.553597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.553614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.553626 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:59Z","lastTransitionTime":"2025-11-22T07:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.656860 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.656927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.656941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.656962 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:03:59 crc kubenswrapper[4856]: I1122 07:03:59.657011 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:03:59Z","lastTransitionTime":"2025-11-22T07:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.929626 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:01 crc kubenswrapper[4856]: E1122 07:04:01.929773 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.930164 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:01 crc kubenswrapper[4856]: E1122 07:04:01.930459 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.930544 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:01 crc kubenswrapper[4856]: E1122 07:04:01.930607 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.931565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.931595 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.931609 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.931630 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.931644 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:01Z","lastTransitionTime":"2025-11-22T07:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:01 crc kubenswrapper[4856]: I1122 07:04:01.931832 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:01 crc kubenswrapper[4856]: E1122 07:04:01.931931 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.034588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.034909 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.034921 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.034935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.034946 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.137714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.137772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.137783 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.137806 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.137818 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.241409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.241452 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.241462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.241480 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.241491 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.344130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.344200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.344215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.344239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.344255 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.447625 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.447687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.447700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.447720 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.447733 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.551276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.551352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.551390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.551422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.551441 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.655866 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.655933 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.655942 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.655960 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.655971 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.728929 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.728963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.728972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.728986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.728998 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: E1122 07:04:02.747237 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:02Z is after 
2025-08-24T17:21:41Z" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.751862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.751930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.751946 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.751973 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.751997 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: E1122 07:04:02.767697 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:02Z is after 
2025-08-24T17:21:41Z" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.771926 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.772000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.772022 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.772052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.772073 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: E1122 07:04:02.796813 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:02Z is after 
2025-08-24T17:21:41Z" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.802825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.802885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.802925 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.802947 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.802961 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: E1122 07:04:02.818667 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:02Z is after 
2025-08-24T17:21:41Z" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.823422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.823460 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.823472 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.823492 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.823524 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: E1122 07:04:02.841740 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:02Z is after 
2025-08-24T17:21:41Z" Nov 22 07:04:02 crc kubenswrapper[4856]: E1122 07:04:02.841878 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.844388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.844550 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.844623 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.844691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.844757 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.946714 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.946772 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.946788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.946811 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:02 crc kubenswrapper[4856]: I1122 07:04:02.946827 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:02Z","lastTransitionTime":"2025-11-22T07:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.049950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.049997 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.050009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.050026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.050038 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.153817 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.153873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.153889 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.153912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.153929 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.257219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.257321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.257344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.257375 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.257395 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.360192 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.360251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.360271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.360307 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.360326 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.463733 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.463788 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.463804 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.463826 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.463841 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.566644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.566733 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.566778 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.566799 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.566814 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.669980 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.670034 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.670046 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.670067 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.670080 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.708667 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.708717 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.708722 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.708755 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:03 crc kubenswrapper[4856]: E1122 07:04:03.708861 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:03 crc kubenswrapper[4856]: E1122 07:04:03.709059 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:03 crc kubenswrapper[4856]: E1122 07:04:03.709411 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:03 crc kubenswrapper[4856]: E1122 07:04:03.709560 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.709957 4856 scope.go:117] "RemoveContainer" containerID="5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.772476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.772535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.772548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.772568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.772583 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.876711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.876782 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.876809 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.876843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.876868 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.947069 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/2.log" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.950424 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.950965 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.965286 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.979932 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97
aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.980701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.980747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.980795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.980818 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.980831 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:03Z","lastTransitionTime":"2025-11-22T07:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:03 crc kubenswrapper[4856]: I1122 07:04:03.997302 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:03Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.013751 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.028578 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.051251 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31005378d312e7eb0fbec5afe8f46c240aa70626
88c991f3174244a329e6f13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to 
c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.066335 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:55Z\\\",\\\"message\\\":\\\"2025-11-22T07:03:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7\\\\n2025-11-22T07:03:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7 to /host/opt/cni/bin/\\\\n2025-11-22T07:03:10Z [verbose] multus-daemon started\\\\n2025-11-22T07:03:10Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:03:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.078064 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.087473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.087591 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.087605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.087633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.087689 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.089982 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.104957 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.125386 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.142286 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.161674 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.180416 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.191015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.191206 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.191313 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.191391 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.191548 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.197640 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.214153 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.226803 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.240329 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 
07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.293834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.293880 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.293896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.293914 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.293928 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.397102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.397160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.397173 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.397193 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.397205 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.501182 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.501240 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.501254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.501276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.501287 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.604239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.604297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.604308 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.604327 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.604339 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.706298 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.706342 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.706374 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.706390 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.706400 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.809626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.809670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.809682 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.809703 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.809717 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.913167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.913233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.913251 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.913277 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.913294 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:04Z","lastTransitionTime":"2025-11-22T07:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.956101 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/3.log" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.957421 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/2.log" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.961555 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e" exitCode=1 Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.961610 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e"} Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.961681 4856 scope.go:117] "RemoveContainer" containerID="5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.962275 4856 scope.go:117] "RemoveContainer" containerID="31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e" Nov 22 07:04:04 crc kubenswrapper[4856]: E1122 07:04:04.962453 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:04:04 crc kubenswrapper[4856]: I1122 07:04:04.976925 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:04Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.016307 4856 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.016351 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.016361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.016379 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.016392 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.026412 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31005378d312e7eb0fbec5afe8f46c240aa70626
88c991f3174244a329e6f13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ca321da6e2d97dc35e31450b62701a4e3b306db6bf903519c625a4b57345ca2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:38Z\\\",\\\"message\\\":\\\"ices.LB{services.LB{Name:\\\\\\\"Service_openshift-apiserver/api_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-apiserver/api\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.37\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1122 07:03:38.666573 6600 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nF1122 07:03:38.666563 6600 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to c\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:04:04Z\\\",\\\"message\\\":\\\"9.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 07:04:04.703905 6998 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] 
Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 07:04:04.703870 6998 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 07:04:04.704050 6998 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:04:04.704087 6998 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:04:04.704206 6998 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:04:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.050334 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:55Z\\\",\\\"message\\\":\\\"2025-11-22T07:03:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7\\\\n2025-11-22T07:03:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7 to /host/opt/cni/bin/\\\\n2025-11-22T07:03:10Z [verbose] multus-daemon started\\\\n2025-11-22T07:03:10Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:03:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.062982 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.079092 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.094173 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.108580 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.119537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.119643 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.119677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.119699 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.119713 4856 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.120408 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.138250 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.151475 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.167718 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.183561 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.196255 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.210212 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:
19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.221988 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.222021 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.222030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.222043 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.222053 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.227640 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.242288 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.258161 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.271239 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.325706 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.325768 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.325784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.325803 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.325819 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.429026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.429080 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.429092 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.429110 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.429121 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.532922 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.533020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.533040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.533103 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.533122 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.636489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.636564 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.636581 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.636605 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.636618 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.709863 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.710067 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.710044 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.709939 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:05 crc kubenswrapper[4856]: E1122 07:04:05.710434 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:05 crc kubenswrapper[4856]: E1122 07:04:05.710583 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:05 crc kubenswrapper[4856]: E1122 07:04:05.710326 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:05 crc kubenswrapper[4856]: E1122 07:04:05.710805 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.739897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.739966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.739983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.740039 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.740063 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.842280 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.842335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.842352 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.842376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.842393 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.944819 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.944872 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.944885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.944907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.944921 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:05Z","lastTransitionTime":"2025-11-22T07:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.966838 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/3.log" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.970782 4856 scope.go:117] "RemoveContainer" containerID="31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e" Nov 22 07:04:05 crc kubenswrapper[4856]: E1122 07:04:05.970948 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:04:05 crc kubenswrapper[4856]: I1122 07:04:05.983640 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.000253 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:05Z is after 2025-08-24T17:21:41Z" Nov 22 
07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.016033 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.030225 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.049054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.049134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.049159 4856 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.049194 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.049218 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.049367 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.063380 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.076307 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.097738 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:04:04Z\\\",\\\"message\\\":\\\"9.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 07:04:04.703905 6998 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 07:04:04.703870 6998 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 07:04:04.704050 6998 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:04:04.704087 6998 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:04:04.704206 6998 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:04:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.111718 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:55Z\\\",\\\"message\\\":\\\"2025-11-22T07:03:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7\\\\n2025-11-22T07:03:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7 to 
/host/opt/cni/bin/\\\\n2025-11-22T07:03:10Z [verbose] multus-daemon started\\\\n2025-11-22T07:03:10Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:03:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.124761 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.138044 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.149753 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.151648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.151678 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.151688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.151707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.151719 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.162300 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.174129 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.187152 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.198482 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.211379 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.223489 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:06Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.254691 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.254724 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.254734 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.254747 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.254758 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.357248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.357276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.357286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.357300 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.357310 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.460241 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.460271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.460306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.460324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.460333 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.562447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.562488 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.562499 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.562530 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.562542 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.665056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.665106 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.665119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.665136 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.665149 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.768105 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.768159 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.768175 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.768196 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.768211 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.870979 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.871017 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.871028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.871054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.871068 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.972835 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.972869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.972878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.972891 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:06 crc kubenswrapper[4856]: I1122 07:04:06.972899 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:06Z","lastTransitionTime":"2025-11-22T07:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.075603 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.075639 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.075648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.075664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.075673 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.178532 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.178569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.178579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.178592 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.178603 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.281655 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.281702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.281717 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.281734 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.281744 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.385116 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.385180 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.385189 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.385207 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.385217 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.488753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.488820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.488834 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.488857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.488872 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.591453 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.591501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.591537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.591557 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.591570 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.695078 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.695153 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.695174 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.695201 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.695222 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.708644 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.708676 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:07 crc kubenswrapper[4856]: E1122 07:04:07.708794 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.708916 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:07 crc kubenswrapper[4856]: E1122 07:04:07.709100 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.709148 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:07 crc kubenswrapper[4856]: E1122 07:04:07.709253 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:07 crc kubenswrapper[4856]: E1122 07:04:07.709357 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.798400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.798450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.798462 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.798482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.798495 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.901574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.901646 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.901664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.901688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:07 crc kubenswrapper[4856]: I1122 07:04:07.901705 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:07Z","lastTransitionTime":"2025-11-22T07:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.004892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.004950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.004966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.004992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.005011 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.108057 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.108134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.108154 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.108174 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.108187 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.211121 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.211218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.211249 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.211288 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.211312 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.315183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.315227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.315237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.315254 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.315264 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.418813 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.418894 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.418919 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.418949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.418974 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.521604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.521673 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.521688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.521713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.521727 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.623950 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.624038 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.624076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.624109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.624132 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.724188 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-722tb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dda6b6e5-61a2-459c-9207-5e5aa500869f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:20Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hf49l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:20Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-722tb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.724779 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.727537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.727567 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.727575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.727588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.727598 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.737200 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8dde9ab-d141-4cb7-9114-30fec2c28887\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c26c7408acc1a61e776fa5dc617800cb78ea4f508459282cdf350d7c72f3735f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61250343dfe15d2055e8c78d7420b33ae07c67b39a38237dc8166868cbe7fe7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://22a9a9bb64b671f3497bef6e17cd1243a6d9f46f865c0681a74f83ca1b3be9ab\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b957182fc9e641263a740cd1929f0998b5834fde310b4486633de585c3f1d41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.766227 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8dce68d3cbbbe14afb7f43e9f0b207ea3c9e31d43ba5137c27adc95607d935a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.788457 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.807453 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://52eab2a23a1d15f210f0d23768ae88e8c6de504807e1d27fa8be66aa0aeb223c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.825824 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-npjs2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5efcf6e5-77e9-4b2d-b26b-d5c46eeefa6f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://29de395dc4cfc0f4d153e503c8006bdf20dbb68c1053296e407cc5691acb5966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://321648ca7f9faa8c8969754fa3c050bd38340ec294f8a537d4ad1c6e2960adaa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6de32e50d7b18a0b1f4442f241c13eec92eaafa43fdaec41c5322632b09e4c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ddacbc4d5f060f9a87ab1e0ebcebd7d20bf6a42f09c23a6cc9fa411810b2e3f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c830ca0062ab593cbe60535af60f1929340c5d0f74c4aee45746b479c698cb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c89a457261e37a3ccaeffae036ffdd2258ed028e40cb08fad87dbb1d7ab504f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b29c6b81b0d99c3d58a061f6539dbae21b62beb5877a1b67ca9511fefc80847\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pk8gj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-npjs2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.829820 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.829848 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc 
kubenswrapper[4856]: I1122 07:04:08.829859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.829873 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.829881 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.840314 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"11160444-93dc-408f-898e-b127c3620ca4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2f43f949a2772727f06ba488cce6aea5ce922ddee42b673127f846242a8225ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bb7a29fdb83158041810eedabe53f05504c887e3f01450819b909adcc2c00\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e8e3a5d3dc1d7f186f0eb2ed37fc240d0d4c6ba4312031599033e64c8966f7c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19597db70bb6730197a0455d36563e873f85e99ad187432dea72215d97dc597\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b486a9983851b6dbc5d5123e315d38f62c5bc20e89e2fc89a4121f841e832400\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-22T07:02:58Z\\\",\\\"message\\\":\\\"W1122 07:02:47.099195 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1122 07:02:47.099809 1 crypto.go:601] Generating new CA for check-endpoints-signer@1763794967 cert, and key in /tmp/serving-cert-1612674170/serving-signer.crt, /tmp/serving-cert-1612674170/serving-signer.key\\\\nI1122 07:02:47.429624 1 observer_polling.go:159] Starting file observer\\\\nW1122 07:02:47.431112 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1122 07:02:47.431266 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1122 07:02:47.432077 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1612674170/tls.crt::/tmp/serving-cert-1612674170/tls.key\\\\\\\"\\\\nF1122 07:02:57.945422 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2f9c9c8007d7b2f4154ab76f0755f8ddb749a13367c90fb2b257f60db120dc3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cbfbd95b1ea72d8c12f0a43bbc1ce6e7bb36c43aedb964d9e6cfcb9f441917ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.852706 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b2bf0c1b9e9ec632d9f66e4bf5b6ee5ab442d187a92b961669fe2f95b771c91c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ba0e7481287501455a9cea38aaaaf9261486677ab790533ea4e5ba61356f6e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.862418 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-44lw8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b51107-7e2b-463e-862c-700ac0976f31\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://219b2152036576fd5a379da974e8ce1ffa182ec2af7add49d2311e9c28fc62c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sslkm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-44lw8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.875299 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"79ea67c8-6903-4252-a766-446631a43c49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87a57187a9c9739756f485bd10acfb986758e8a4ff4e817159c70e3c27c90e5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23c15576582a3ae8f9a7ce0ad46de4599836414b7500bf38007e5c6f28a1f642\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b59tk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6df25\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 
07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.886554 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f96481a3-6094-4d09-b606-a53e1d016e5f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://289319a76359a865092209ae7b4c1945c02be4817a450a8995562c1296e06772\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9a2edde11713e7ab38d23f65d5505aae568c18dab0562ec6329f18587334302\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.897532 4856 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b40c496e-61c8-4075-8458-a68ff5cce142\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:02:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a763c0331e7cddd0a225fed5d266bb61b64168a86c91607efea86c10d90b6a5a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://081c75595e1ae1bf22e081b0f1d2f2ab0c2de1faacb2152112039775b82cbf89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd103654296a981d4f128f44b06cb90a76cf6e7e7351898e3c42ad38e89772dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:02:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://04257680a85831b2b20c985b28951a568be0b6b375375361a3805a313f7d7f3c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:02:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:02:40Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:02:38Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.907795 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.919300 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.927848 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0efefc3f-da5f-4035-81dc-6b5ab51e3df1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a599a55a58a4d8499528e2d9687d04441fc7055065499d0c9cc304491f96208\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbs6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-klt85\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.932711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.932743 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.932752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.932766 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.932775 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:08Z","lastTransitionTime":"2025-11-22T07:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.953758 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"752eee1c-98a9-4221-88a7-f332f704d4cf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:04:04Z\\\",\\\"message\\\":\\\"9.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 07:04:04.703905 6998 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-marketplace/marketplace-operator-metrics]} name:Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.53:8081: 10.217.5.53:8383:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {89fe421e-04e8-4967-ac75-77a0e6f784ef}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1122 07:04:04.703870 6998 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1122 07:04:04.704050 6998 ovnkube.go:599] Stopped ovnkube\\\\nI1122 07:04:04.704087 6998 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1122 07:04:04.704206 6998 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:04:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxgp8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2685z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.969559 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fjqpv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59c3498a-6659-454c-9fe0-361fa7a0783c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-22T07:03:55Z\\\",\\\"message\\\":\\\"2025-11-22T07:03:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7\\\\n2025-11-22T07:03:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_414963e3-f7fd-4e47-aa15-1852bee88da7 to /host/opt/cni/bin/\\\\n2025-11-22T07:03:10Z [verbose] multus-daemon started\\\\n2025-11-22T07:03:10Z [verbose] Readiness Indicator file check\\\\n2025-11-22T07:03:55Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-22T07:03:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s7zdl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:07Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fjqpv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:08 crc kubenswrapper[4856]: I1122 07:04:08.980207 4856 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-c9svb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f19c49d-eee1-47ff-813d-51642778850a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-22T07:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://89e36bf3296539818b19e65cd25b6b9e5a1977dd868f9cf3e55df2a4bf080397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-22T07:03:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b296m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-22T07:03:08Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-c9svb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:08Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.035613 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.035687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.035705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.035730 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.035746 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.139091 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.139339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.139350 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.139366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.139377 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.248632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.248697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.248711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.248732 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.248745 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.352186 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.352226 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.352236 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.352248 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.352259 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.454994 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.455036 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.455045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.455059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.455067 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.558899 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.558954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.558966 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.558985 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.558997 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.660741 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.660821 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.660840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.660865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.660881 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.709227 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.709335 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.709379 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.709398 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:09 crc kubenswrapper[4856]: E1122 07:04:09.709410 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:09 crc kubenswrapper[4856]: E1122 07:04:09.709470 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:09 crc kubenswrapper[4856]: E1122 07:04:09.709618 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:09 crc kubenswrapper[4856]: E1122 07:04:09.709740 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.763822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.763859 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.763868 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.763879 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.763889 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.866648 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.866695 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.866708 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.866726 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.866740 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.969348 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.969385 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.969393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.969405 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:09 crc kubenswrapper[4856]: I1122 07:04:09.969414 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:09Z","lastTransitionTime":"2025-11-22T07:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.072781 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.072831 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.072845 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.072865 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.072878 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.176481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.176543 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.176556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.176575 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.176587 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.279297 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.279330 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.279345 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.279363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.279373 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.382377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.382456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.382469 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.382485 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.382496 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.488079 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.488157 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.488181 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.488210 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.488230 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.591588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.591638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.591651 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.591668 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.591678 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.694622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.694664 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.694679 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.694697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.694710 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.797473 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.798225 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.798456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.798738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.798951 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.902358 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.902418 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.902430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.902456 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:10 crc kubenswrapper[4856]: I1122 07:04:10.902469 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:10Z","lastTransitionTime":"2025-11-22T07:04:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.005412 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.005465 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.005478 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.005501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.005538 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.108562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.108612 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.108622 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.108638 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.108649 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.212328 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.212389 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.212408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.212433 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.212450 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.316319 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.316393 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.316408 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.316432 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.316447 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.419674 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.419711 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.419721 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.419734 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.419743 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.522446 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.522491 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.522500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.522528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.522540 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.551002 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.551149 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.551124708 +0000 UTC m=+157.964517976 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.551217 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.551265 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.551295 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.551684 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.551913 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.551955 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.551974 4856 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.552036 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.552019448 +0000 UTC m=+157.965412716 (durationBeforeRetry 1m4s). 
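
The UnmountVolume.TearDown failure above happens because the kubevirt.io.hostpath-provisioner CSI driver is not yet registered with the kubelet, so the retry scheduled for 07:05:15 can only succeed once the driver's node plugin is back. A minimal sketch of one way to check the node-level registration from outside the node, assuming the Python kubernetes client and a reachable kubeconfig (the node name crc and the driver name are taken from the log):

    from kubernetes import client, config

    config.load_kube_config()                      # or load_incluster_config() inside a pod
    storage = client.StorageV1Api()

    # CSINode mirrors the kubelet's per-node CSI driver registrations.
    csinode = storage.read_csi_node("crc")         # node name from the log
    registered = [d.name for d in (csinode.spec.drivers or [])]
    print("CSI drivers registered on crc:", registered)

    if "kubevirt.io.hostpath-provisioner" not in registered:
        print("hostpath-provisioner is not registered on the node yet, so the "
              "unmount retry scheduled for 07:05:15 will most likely fail again")
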
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.552392 4856 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.552463 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.552449518 +0000 UTC m=+157.965842796 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.552709 4856 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.552764 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.552750056 +0000 UTC m=+157.966143334 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.555693 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.555735 4856 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.555759 4856 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.555858 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
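
The repeated "object ... not registered" errors for kube-root-ca.crt, openshift-service-ca.crt, networking-console-plugin and networking-console-plugin-cert normally mean the restarted kubelet has not yet synced those objects into its volume manager, not that they are missing from the API. A minimal sanity check, assuming the Python kubernetes client and a kubeconfig (all namespaces and names are taken verbatim from the errors above):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    core = client.CoreV1Api()

    # Namespaces and object names copied from the kubelet errors above.
    checks = [
        ("configmap", "openshift-network-diagnostics", "kube-root-ca.crt"),
        ("configmap", "openshift-network-diagnostics", "openshift-service-ca.crt"),
        ("configmap", "openshift-network-console",     "networking-console-plugin"),
        ("secret",    "openshift-network-console",     "networking-console-plugin-cert"),
    ]

    for kind, namespace, name in checks:
        try:
            if kind == "configmap":
                core.read_namespaced_config_map(name, namespace)
            else:
                core.read_namespaced_secret(name, namespace)
            print(f"ok: {kind} {namespace}/{name} exists in the API")
        except ApiException as exc:
            print(f"problem: {kind} {namespace}/{name} -> HTTP {exc.status}")
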
No retries permitted until 2025-11-22 07:05:15.555842657 +0000 UTC m=+157.969235925 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.625119 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.625160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.625169 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.625185 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.625196 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.709498 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.709550 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.709694 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.709833 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.709846 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.709925 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
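
Every "Error syncing pod" and NodeNotReady record in this stretch traces back to the same condition: there is no CNI configuration file in /etc/kubernetes/cni/net.d/, which the cluster's network plugin writes once its pods are up. A minimal node-local sketch of what the kubelet is waiting for (the path is taken from the log; run this on the node itself):

    import os

    CNI_DIR = "/etc/kubernetes/cni/net.d"   # directory named in the NetworkPluginNotReady message

    try:
        entries = sorted(os.listdir(CNI_DIR))
    except FileNotFoundError:
        entries = []

    if entries:
        print(f"{CNI_DIR} contains {entries}; NetworkReady should recover on the next sync")
    else:
        print(f"{CNI_DIR} is empty or missing; the network plugin has not written its "
              "config yet, which is exactly what the kubelet keeps reporting")
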
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.710185 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:11 crc kubenswrapper[4856]: E1122 07:04:11.710376 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.728523 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.728579 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.728590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.728604 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.728615 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.830886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.830937 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.830952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.830972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.830988 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.934051 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.934107 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.934124 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.934152 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:11 crc kubenswrapper[4856]: I1122 07:04:11.934174 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:11Z","lastTransitionTime":"2025-11-22T07:04:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.037054 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.037100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.037112 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.037130 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.037144 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.139777 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.139826 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.139843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.139862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.139873 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.242215 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.242273 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.242286 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.242306 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.242319 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.345026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.345086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.345101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.345128 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.345160 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.448145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.448204 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.448219 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.448239 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.448253 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.550344 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.550392 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.550402 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.550419 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.550432 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.653528 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.653582 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.653599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.653619 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.653631 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.755565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.755610 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.755621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.755642 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.755659 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.858885 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.858961 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.858976 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.859001 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.859014 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.916633 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.916687 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.916701 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.916722 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.916737 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: E1122 07:04:12.931438 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.936274 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.936349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
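
The node status patch is being rejected by the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-22T07:04:12Z, so status updates will likely keep failing until that certificate is rotated. A minimal sketch, run on the node, that fetches the webhook's certificate and prints its validity window (address and dates from the log; assumes the third-party cryptography package is installed):

    import ssl
    from cryptography import x509

    # Fetch the webhook's serving certificate without verification (it is expired).
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())

    print("subject:   ", cert.subject.rfc4514_string())
    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)
    # The kubelet clock reads 2025-11-22T07:04:12Z, so a not-after of
    # 2025-08-24T17:21:41Z means node status patches will keep being rejected
    # until this certificate is rotated.
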
event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.936362 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.936382 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.936394 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: E1122 07:04:12.957687 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.963102 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.963162 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.963176 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.963198 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.963211 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:12 crc kubenswrapper[4856]: E1122 07:04:12.982107 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:12Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.986697 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.986775 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.986789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.986811 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:12 crc kubenswrapper[4856]: I1122 07:04:12.986828 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:12Z","lastTransitionTime":"2025-11-22T07:04:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: E1122 07:04:13.016190 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.021789 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.021857 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.021869 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.021896 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.021908 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: E1122 07:04:13.042699 4856 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-22T07:04:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"306542ef-d3ef-4be8-9ac9-776f57e8a26c\\\",\\\"systemUUID\\\":\\\"f77229a0-445b-4f39-ab07-3ae475712a7b\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-22T07:04:13Z is after 2025-08-24T17:21:41Z" Nov 22 07:04:13 crc kubenswrapper[4856]: E1122 07:04:13.042912 4856 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.044974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.045023 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.045037 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.045057 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.045069 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.149369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.149447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.149466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.149501 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.149557 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.254282 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.254353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.254376 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.254407 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.254427 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.358000 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.358104 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.358150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.358183 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.358205 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.461556 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.461632 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.461659 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.461693 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.461716 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.564339 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.564394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.564411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.564434 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.564450 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.667150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.667197 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.667217 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.667233 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.667246 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.708892 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.708941 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:13 crc kubenswrapper[4856]: E1122 07:04:13.709012 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.709055 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.709098 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:13 crc kubenswrapper[4856]: E1122 07:04:13.709274 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:13 crc kubenswrapper[4856]: E1122 07:04:13.709378 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:13 crc kubenswrapper[4856]: E1122 07:04:13.709493 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.770444 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.770490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.770500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.770535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.770547 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.873024 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.873109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.873132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.873160 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.873180 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.975629 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.975690 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.975707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.975727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:13 crc kubenswrapper[4856]: I1122 07:04:13.975741 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:13Z","lastTransitionTime":"2025-11-22T07:04:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.078208 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.078482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.078599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.078677 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.078748 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.181983 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.182020 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.182032 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.182048 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.182060 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.286150 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.286178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.286188 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.286200 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.286209 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.388977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.389018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.389030 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.389045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.389056 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.491647 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.491694 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.491705 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.491719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.491728 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.593924 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.593977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.593995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.594014 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.594023 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.696347 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.696388 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.696400 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.696414 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.696425 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.799041 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.799361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.799457 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.799539 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.799615 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.902076 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.902115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.902126 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.902141 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:14 crc kubenswrapper[4856]: I1122 07:04:14.902155 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:14Z","lastTransitionTime":"2025-11-22T07:04:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.003943 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.003978 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.003992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.004007 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.004015 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.106231 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.106276 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.106284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.106299 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.106309 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.208840 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.208886 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.208897 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.208912 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.208923 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.311045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.311088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.311098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.311114 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.311124 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.413949 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.414006 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.414015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.414029 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.414055 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.516954 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.516986 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.516995 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.517009 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.517019 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.621012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.621068 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.621082 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.621100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.621118 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.709055 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.709177 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:15 crc kubenswrapper[4856]: E1122 07:04:15.709313 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.709350 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.709374 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:15 crc kubenswrapper[4856]: E1122 07:04:15.709447 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:15 crc kubenswrapper[4856]: E1122 07:04:15.709525 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:15 crc kubenswrapper[4856]: E1122 07:04:15.709691 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.723807 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.723839 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.723849 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.723863 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.723872 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.826033 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.826072 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.826082 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.826098 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.826110 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.928479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.928562 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.928580 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.928597 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:15 crc kubenswrapper[4856]: I1122 07:04:15.928609 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:15Z","lastTransitionTime":"2025-11-22T07:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.030644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.030945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.031016 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.031077 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.031145 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.133084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.133122 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.133131 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.133145 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.133155 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.235713 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.236365 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.236451 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.236589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.236685 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.338736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.338769 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.338779 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.338794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.338804 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.441272 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.441305 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.441315 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.441330 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.441340 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.543765 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.543802 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.543812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.543827 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.543838 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.648171 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.648218 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.648227 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.648242 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.648253 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.751084 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.751232 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.751246 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.751261 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.751272 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.853702 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.853742 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.853753 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.853770 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.853780 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.956805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.956876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.956890 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.956910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:16 crc kubenswrapper[4856]: I1122 07:04:16.956924 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:16Z","lastTransitionTime":"2025-11-22T07:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.060045 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.060097 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.060109 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.060132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.060146 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.163459 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.163565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.163588 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.163621 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.163648 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.265945 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.265984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.265992 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.266005 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.266014 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.368688 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.368762 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.368774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.368792 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.368805 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.471363 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.471417 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.471431 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.471450 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.471462 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.574805 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.574851 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.574861 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.574877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.574887 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.678015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.678065 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.678077 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.678101 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.678115 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.709333 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.709354 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.709454 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:17 crc kubenswrapper[4856]: E1122 07:04:17.709557 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.709588 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:17 crc kubenswrapper[4856]: E1122 07:04:17.709973 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:17 crc kubenswrapper[4856]: E1122 07:04:17.710173 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:17 crc kubenswrapper[4856]: E1122 07:04:17.710256 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.781195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.781247 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.781262 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.781284 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.781299 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.884481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.884551 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.884565 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.884614 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.884630 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.986774 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.986816 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.986825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.986838 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:17 crc kubenswrapper[4856]: I1122 07:04:17.986847 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:17Z","lastTransitionTime":"2025-11-22T07:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.089773 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.089814 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.089822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.089836 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.089846 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.194900 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.195321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.195335 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.195353 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.195367 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.297987 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.298018 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.298026 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.298040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.298049 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.399998 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.400056 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.400071 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.400094 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.400108 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.503447 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.503529 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.503548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.503569 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.503582 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.606748 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.606784 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.606794 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.606808 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.606818 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.716271 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.716302 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.716310 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.716324 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.716333 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.766641 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.766621497 podStartE2EDuration="1m8.766621497s" podCreationTimestamp="2025-11-22 07:03:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.751206151 +0000 UTC m=+101.164599419" watchObservedRunningTime="2025-11-22 07:04:18.766621497 +0000 UTC m=+101.180014755" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.819035 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.819066 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.819075 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.819088 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.819099 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.830863 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-npjs2" podStartSLOduration=73.830842569 podStartE2EDuration="1m13.830842569s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.815415733 +0000 UTC m=+101.228809011" watchObservedRunningTime="2025-11-22 07:04:18.830842569 +0000 UTC m=+101.244235827" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.846647 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=59.846627103 podStartE2EDuration="59.846627103s" podCreationTimestamp="2025-11-22 07:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.832482947 +0000 UTC m=+101.245876235" watchObservedRunningTime="2025-11-22 07:04:18.846627103 +0000 UTC m=+101.260020361" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.870962 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-44lw8" podStartSLOduration=73.870940044 podStartE2EDuration="1m13.870940044s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.859147352 +0000 UTC m=+101.272540610" watchObservedRunningTime="2025-11-22 07:04:18.870940044 +0000 UTC m=+101.284333302" Nov 22 
07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.871643 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6df25" podStartSLOduration=72.87163561 podStartE2EDuration="1m12.87163561s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.871039726 +0000 UTC m=+101.284432994" watchObservedRunningTime="2025-11-22 07:04:18.87163561 +0000 UTC m=+101.285028868" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.883908 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=32.883886863 podStartE2EDuration="32.883886863s" podCreationTimestamp="2025-11-22 07:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.883461233 +0000 UTC m=+101.296854491" watchObservedRunningTime="2025-11-22 07:04:18.883886863 +0000 UTC m=+101.297280121" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.912181 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=10.912164936 podStartE2EDuration="10.912164936s" podCreationTimestamp="2025-11-22 07:04:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.911030909 +0000 UTC m=+101.324424167" watchObservedRunningTime="2025-11-22 07:04:18.912164936 +0000 UTC m=+101.325558194" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.921424 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.921471 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.921482 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.921534 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.921548 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:18Z","lastTransitionTime":"2025-11-22T07:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.926063 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=47.926047256 podStartE2EDuration="47.926047256s" podCreationTimestamp="2025-11-22 07:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.926022656 +0000 UTC m=+101.339415914" watchObservedRunningTime="2025-11-22 07:04:18.926047256 +0000 UTC m=+101.339440514" Nov 22 07:04:18 crc kubenswrapper[4856]: I1122 07:04:18.968537 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podStartSLOduration=73.968489306 podStartE2EDuration="1m13.968489306s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:18.967750379 +0000 UTC m=+101.381143637" watchObservedRunningTime="2025-11-22 07:04:18.968489306 +0000 UTC m=+101.381882564" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.019332 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-c9svb" podStartSLOduration=74.019313958 podStartE2EDuration="1m14.019313958s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:19.018895879 +0000 UTC m=+101.432289137" watchObservedRunningTime="2025-11-22 07:04:19.019313958 +0000 UTC m=+101.432707226" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.019713 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fjqpv" podStartSLOduration=74.019705748 podStartE2EDuration="1m14.019705748s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:19.008088329 +0000 UTC m=+101.421481587" watchObservedRunningTime="2025-11-22 07:04:19.019705748 +0000 UTC m=+101.433099006" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.023644 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.023700 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.023719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.023738 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.023755 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.126151 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.126211 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.126221 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.126237 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.126262 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.228378 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.228666 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.228676 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.228689 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.228698 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.331500 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.332086 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.332202 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.332287 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.332368 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.434470 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.434497 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.434504 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.434542 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.434555 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.537052 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.537361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.537430 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.537498 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.537614 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.640046 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.640329 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.640427 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.640520 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.640587 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.709060 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.709126 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.709177 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.709084 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:19 crc kubenswrapper[4856]: E1122 07:04:19.709256 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:19 crc kubenswrapper[4856]: E1122 07:04:19.709371 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:19 crc kubenswrapper[4856]: E1122 07:04:19.709457 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:19 crc kubenswrapper[4856]: E1122 07:04:19.709592 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.742877 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.742928 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.742938 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.742952 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.742964 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.845555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.845590 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.845602 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.845618 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.845631 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.947481 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.947578 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.947596 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.947616 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:19 crc kubenswrapper[4856]: I1122 07:04:19.947633 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:19Z","lastTransitionTime":"2025-11-22T07:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.049967 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.050012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.050028 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.050046 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.050060 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.152149 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.152220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.152234 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.152260 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.152277 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.255626 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.255911 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.255974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.256040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.256114 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.358935 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.358964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.358972 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.358984 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.358993 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.462209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.462278 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.462295 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.462317 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.462333 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.565656 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.565704 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.565719 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.565736 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.565747 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.668641 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.668707 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.668727 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.668752 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.668770 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.711262 4856 scope.go:117] "RemoveContainer" containerID="31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e" Nov 22 07:04:20 crc kubenswrapper[4856]: E1122 07:04:20.713362 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.771467 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.771503 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.771524 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.771537 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.771548 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.874795 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.874843 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.874854 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.874871 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.874883 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.977489 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.977607 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.977637 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.977670 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:20 crc kubenswrapper[4856]: I1122 07:04:20.977699 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:20Z","lastTransitionTime":"2025-11-22T07:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.081040 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.081100 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.081115 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.081134 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.081149 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.184220 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.184289 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.184304 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.184321 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.184334 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.286990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.287042 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.287059 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.287081 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.287094 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.390320 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.390476 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.390490 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.390560 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.390573 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.493361 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.493411 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.493421 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.493442 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.493456 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.597333 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.597369 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.597377 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.597394 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.597407 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.700874 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.700927 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.700941 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.700963 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.700980 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.709337 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.709395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.709364 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.709371 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:21 crc kubenswrapper[4856]: E1122 07:04:21.709563 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:21 crc kubenswrapper[4856]: E1122 07:04:21.709671 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:21 crc kubenswrapper[4856]: E1122 07:04:21.709778 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:21 crc kubenswrapper[4856]: E1122 07:04:21.709953 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.804422 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.804474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.804484 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.804502 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.804535 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.907822 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.907862 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.907876 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.907892 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:21 crc kubenswrapper[4856]: I1122 07:04:21.907905 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:21Z","lastTransitionTime":"2025-11-22T07:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.010461 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.010525 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.010535 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.010548 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.010560 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.112589 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.112627 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.112636 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.112649 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.112659 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.217038 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.217170 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.217195 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.217258 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.217279 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.320349 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.320397 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.320415 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.320435 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.320449 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.423216 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.423257 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.423266 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.423282 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.423295 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.526132 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.526167 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.526178 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.526209 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.526219 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.628903 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.628938 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.628964 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.628977 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.628986 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.730819 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.730878 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.730907 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.730930 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.730944 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.833770 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.833812 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.833825 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.833844 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.833857 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.936316 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.936357 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.936366 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.936383 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:22 crc kubenswrapper[4856]: I1122 07:04:22.936392 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:22Z","lastTransitionTime":"2025-11-22T07:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.038479 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.038546 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.038555 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.038568 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.038577 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:23Z","lastTransitionTime":"2025-11-22T07:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.140538 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.140574 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.140584 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.140599 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.140610 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:23Z","lastTransitionTime":"2025-11-22T07:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.242409 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.242466 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.242474 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.242487 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.242531 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:23Z","lastTransitionTime":"2025-11-22T07:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.344910 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.344974 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.344990 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.345012 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.345030 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:23Z","lastTransitionTime":"2025-11-22T07:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.346953 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.346991 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.347002 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.347015 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.347043 4856 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-22T07:04:23Z","lastTransitionTime":"2025-11-22T07:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.385771 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg"] Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.386252 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.388023 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.388014 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.392292 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.392373 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.474101 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.474140 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.474167 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.474182 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.474203 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.575577 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.575622 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.575654 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.575719 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.575743 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.575799 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: 
\"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.575801 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.576861 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-service-ca\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.581545 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.595692 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/118df57d-fe98-4c49-bc0d-dbceab3e9fa8-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-g9hjg\" (UID: \"118df57d-fe98-4c49-bc0d-dbceab3e9fa8\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.704554 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.708611 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.708688 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:23 crc kubenswrapper[4856]: E1122 07:04:23.708734 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:23 crc kubenswrapper[4856]: E1122 07:04:23.708799 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.708839 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:23 crc kubenswrapper[4856]: I1122 07:04:23.708974 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:23 crc kubenswrapper[4856]: E1122 07:04:23.709024 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:23 crc kubenswrapper[4856]: E1122 07:04:23.709122 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:23 crc kubenswrapper[4856]: W1122 07:04:23.717956 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod118df57d_fe98_4c49_bc0d_dbceab3e9fa8.slice/crio-9deb759d2af397d90adf5ffd9c5c9fa65cbc5751019d0fb2164b32c3766f16ed WatchSource:0}: Error finding container 9deb759d2af397d90adf5ffd9c5c9fa65cbc5751019d0fb2164b32c3766f16ed: Status 404 returned error can't find the container with id 9deb759d2af397d90adf5ffd9c5c9fa65cbc5751019d0fb2164b32c3766f16ed Nov 22 07:04:24 crc kubenswrapper[4856]: I1122 07:04:24.030907 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" event={"ID":"118df57d-fe98-4c49-bc0d-dbceab3e9fa8","Type":"ContainerStarted","Data":"2f9074fe728ef631ee41d8851591c155c453357d99b7c9a5d0177445396360a0"} Nov 22 07:04:24 crc kubenswrapper[4856]: I1122 07:04:24.030967 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" event={"ID":"118df57d-fe98-4c49-bc0d-dbceab3e9fa8","Type":"ContainerStarted","Data":"9deb759d2af397d90adf5ffd9c5c9fa65cbc5751019d0fb2164b32c3766f16ed"} Nov 22 07:04:24 crc kubenswrapper[4856]: I1122 07:04:24.046741 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-g9hjg" podStartSLOduration=79.046724311 podStartE2EDuration="1m19.046724311s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:24.045253988 +0000 UTC m=+106.458647246" watchObservedRunningTime="2025-11-22 07:04:24.046724311 +0000 UTC m=+106.460117569" Nov 22 07:04:24 crc kubenswrapper[4856]: I1122 07:04:24.183848 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:24 crc kubenswrapper[4856]: E1122 07:04:24.184085 4856 
secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:04:24 crc kubenswrapper[4856]: E1122 07:04:24.184182 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs podName:dda6b6e5-61a2-459c-9207-5e5aa500869f nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.184164593 +0000 UTC m=+170.597557851 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs") pod "network-metrics-daemon-722tb" (UID: "dda6b6e5-61a2-459c-9207-5e5aa500869f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 22 07:04:25 crc kubenswrapper[4856]: I1122 07:04:25.708990 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:25 crc kubenswrapper[4856]: I1122 07:04:25.709096 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:25 crc kubenswrapper[4856]: I1122 07:04:25.709230 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:25 crc kubenswrapper[4856]: I1122 07:04:25.709424 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:25 crc kubenswrapper[4856]: E1122 07:04:25.709547 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:25 crc kubenswrapper[4856]: E1122 07:04:25.709409 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:25 crc kubenswrapper[4856]: E1122 07:04:25.709848 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:25 crc kubenswrapper[4856]: E1122 07:04:25.710042 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:27 crc kubenswrapper[4856]: I1122 07:04:27.709428 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:27 crc kubenswrapper[4856]: I1122 07:04:27.709542 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:27 crc kubenswrapper[4856]: I1122 07:04:27.709558 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:27 crc kubenswrapper[4856]: I1122 07:04:27.709558 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:27 crc kubenswrapper[4856]: E1122 07:04:27.709727 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:27 crc kubenswrapper[4856]: E1122 07:04:27.709888 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:27 crc kubenswrapper[4856]: E1122 07:04:27.710000 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:27 crc kubenswrapper[4856]: E1122 07:04:27.710050 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:29 crc kubenswrapper[4856]: I1122 07:04:29.708648 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:29 crc kubenswrapper[4856]: I1122 07:04:29.708649 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:29 crc kubenswrapper[4856]: E1122 07:04:29.709112 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:29 crc kubenswrapper[4856]: I1122 07:04:29.708800 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:29 crc kubenswrapper[4856]: E1122 07:04:29.709242 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:29 crc kubenswrapper[4856]: I1122 07:04:29.708792 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:29 crc kubenswrapper[4856]: E1122 07:04:29.709345 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:29 crc kubenswrapper[4856]: E1122 07:04:29.709487 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:31 crc kubenswrapper[4856]: I1122 07:04:31.708595 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:31 crc kubenswrapper[4856]: I1122 07:04:31.708671 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:31 crc kubenswrapper[4856]: I1122 07:04:31.708724 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:31 crc kubenswrapper[4856]: E1122 07:04:31.708734 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:31 crc kubenswrapper[4856]: I1122 07:04:31.708595 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:31 crc kubenswrapper[4856]: E1122 07:04:31.708826 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:31 crc kubenswrapper[4856]: E1122 07:04:31.708965 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:31 crc kubenswrapper[4856]: E1122 07:04:31.709005 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:33 crc kubenswrapper[4856]: I1122 07:04:33.709429 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:33 crc kubenswrapper[4856]: I1122 07:04:33.709530 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:33 crc kubenswrapper[4856]: I1122 07:04:33.709557 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:33 crc kubenswrapper[4856]: E1122 07:04:33.709614 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:33 crc kubenswrapper[4856]: I1122 07:04:33.709651 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:33 crc kubenswrapper[4856]: E1122 07:04:33.709796 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:33 crc kubenswrapper[4856]: E1122 07:04:33.709827 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:33 crc kubenswrapper[4856]: E1122 07:04:33.709889 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:34 crc kubenswrapper[4856]: I1122 07:04:34.709949 4856 scope.go:117] "RemoveContainer" containerID="31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e" Nov 22 07:04:34 crc kubenswrapper[4856]: E1122 07:04:34.710158 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2685z_openshift-ovn-kubernetes(752eee1c-98a9-4221-88a7-f332f704d4cf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" Nov 22 07:04:35 crc kubenswrapper[4856]: I1122 07:04:35.708528 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:35 crc kubenswrapper[4856]: E1122 07:04:35.708816 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:35 crc kubenswrapper[4856]: I1122 07:04:35.708531 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:35 crc kubenswrapper[4856]: E1122 07:04:35.709029 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:35 crc kubenswrapper[4856]: I1122 07:04:35.708525 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:35 crc kubenswrapper[4856]: I1122 07:04:35.708575 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:35 crc kubenswrapper[4856]: E1122 07:04:35.709287 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:35 crc kubenswrapper[4856]: E1122 07:04:35.709330 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:37 crc kubenswrapper[4856]: I1122 07:04:37.709176 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:37 crc kubenswrapper[4856]: I1122 07:04:37.709175 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:37 crc kubenswrapper[4856]: I1122 07:04:37.709205 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:37 crc kubenswrapper[4856]: I1122 07:04:37.709387 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:37 crc kubenswrapper[4856]: E1122 07:04:37.709720 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:37 crc kubenswrapper[4856]: E1122 07:04:37.709781 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:37 crc kubenswrapper[4856]: E1122 07:04:37.709852 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:37 crc kubenswrapper[4856]: E1122 07:04:37.710028 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:38 crc kubenswrapper[4856]: E1122 07:04:38.698614 4856 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 22 07:04:38 crc kubenswrapper[4856]: E1122 07:04:38.838918 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:04:39 crc kubenswrapper[4856]: I1122 07:04:39.708805 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:39 crc kubenswrapper[4856]: I1122 07:04:39.708846 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:39 crc kubenswrapper[4856]: I1122 07:04:39.708810 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:39 crc kubenswrapper[4856]: E1122 07:04:39.708929 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:39 crc kubenswrapper[4856]: E1122 07:04:39.709007 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:39 crc kubenswrapper[4856]: E1122 07:04:39.709092 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:39 crc kubenswrapper[4856]: I1122 07:04:39.709438 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:39 crc kubenswrapper[4856]: E1122 07:04:39.709579 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:41 crc kubenswrapper[4856]: I1122 07:04:41.709697 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:41 crc kubenswrapper[4856]: I1122 07:04:41.709786 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:41 crc kubenswrapper[4856]: E1122 07:04:41.709905 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:41 crc kubenswrapper[4856]: I1122 07:04:41.710008 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:41 crc kubenswrapper[4856]: E1122 07:04:41.710149 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:41 crc kubenswrapper[4856]: E1122 07:04:41.710285 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:41 crc kubenswrapper[4856]: I1122 07:04:41.710754 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:41 crc kubenswrapper[4856]: E1122 07:04:41.710901 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.095618 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/1.log" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.096378 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/0.log" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.096477 4856 generic.go:334] "Generic (PLEG): container finished" podID="59c3498a-6659-454c-9fe0-361fa7a0783c" containerID="85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956" exitCode=1 Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.096554 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerDied","Data":"85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956"} Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.096616 4856 scope.go:117] "RemoveContainer" containerID="89026c485a1a0f39c7607e451e145e3eddb8a9a31c1d5936de5eb598aa66cdd4" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.097195 4856 scope.go:117] "RemoveContainer" containerID="85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956" Nov 22 07:04:43 crc kubenswrapper[4856]: E1122 07:04:43.097561 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-fjqpv_openshift-multus(59c3498a-6659-454c-9fe0-361fa7a0783c)\"" pod="openshift-multus/multus-fjqpv" podUID="59c3498a-6659-454c-9fe0-361fa7a0783c" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.708679 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.708716 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.708707 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:43 crc kubenswrapper[4856]: I1122 07:04:43.708679 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:43 crc kubenswrapper[4856]: E1122 07:04:43.708793 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:43 crc kubenswrapper[4856]: E1122 07:04:43.708889 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:43 crc kubenswrapper[4856]: E1122 07:04:43.709098 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:43 crc kubenswrapper[4856]: E1122 07:04:43.709169 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:43 crc kubenswrapper[4856]: E1122 07:04:43.840433 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:04:44 crc kubenswrapper[4856]: I1122 07:04:44.100814 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/1.log" Nov 22 07:04:45 crc kubenswrapper[4856]: I1122 07:04:45.708902 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:45 crc kubenswrapper[4856]: I1122 07:04:45.708946 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:45 crc kubenswrapper[4856]: I1122 07:04:45.709017 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:45 crc kubenswrapper[4856]: I1122 07:04:45.708926 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:45 crc kubenswrapper[4856]: E1122 07:04:45.709097 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:45 crc kubenswrapper[4856]: E1122 07:04:45.709174 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:45 crc kubenswrapper[4856]: E1122 07:04:45.709252 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:45 crc kubenswrapper[4856]: E1122 07:04:45.709337 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:47 crc kubenswrapper[4856]: I1122 07:04:47.709383 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:47 crc kubenswrapper[4856]: I1122 07:04:47.709394 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:47 crc kubenswrapper[4856]: I1122 07:04:47.709395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:47 crc kubenswrapper[4856]: E1122 07:04:47.709828 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:47 crc kubenswrapper[4856]: E1122 07:04:47.709574 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:47 crc kubenswrapper[4856]: E1122 07:04:47.709964 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:47 crc kubenswrapper[4856]: I1122 07:04:47.709554 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:47 crc kubenswrapper[4856]: E1122 07:04:47.710122 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:48 crc kubenswrapper[4856]: I1122 07:04:48.711495 4856 scope.go:117] "RemoveContainer" containerID="31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e" Nov 22 07:04:48 crc kubenswrapper[4856]: E1122 07:04:48.841286 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.119751 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/3.log" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.122813 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerStarted","Data":"48539420b5e6d6a577381d7e945bd14c09869ee456dba40d36330cf27bd84070"} Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.123416 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.160775 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podStartSLOduration=104.16075379 podStartE2EDuration="1m44.16075379s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:04:49.160476354 +0000 UTC m=+131.573869662" watchObservedRunningTime="2025-11-22 07:04:49.16075379 +0000 UTC m=+131.574147048" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.709612 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.709703 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:49 crc kubenswrapper[4856]: E1122 07:04:49.709729 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.709887 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:49 crc kubenswrapper[4856]: E1122 07:04:49.709888 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:49 crc kubenswrapper[4856]: E1122 07:04:49.709933 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.710090 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:49 crc kubenswrapper[4856]: E1122 07:04:49.710218 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:49 crc kubenswrapper[4856]: I1122 07:04:49.818979 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-722tb"] Nov 22 07:04:50 crc kubenswrapper[4856]: I1122 07:04:50.128132 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:50 crc kubenswrapper[4856]: E1122 07:04:50.128565 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:51 crc kubenswrapper[4856]: I1122 07:04:51.709446 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:51 crc kubenswrapper[4856]: I1122 07:04:51.709578 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:51 crc kubenswrapper[4856]: I1122 07:04:51.709642 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:51 crc kubenswrapper[4856]: E1122 07:04:51.710800 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:51 crc kubenswrapper[4856]: I1122 07:04:51.711203 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:51 crc kubenswrapper[4856]: E1122 07:04:51.711403 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:51 crc kubenswrapper[4856]: E1122 07:04:51.711348 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:51 crc kubenswrapper[4856]: E1122 07:04:51.711243 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:53 crc kubenswrapper[4856]: I1122 07:04:53.709644 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:53 crc kubenswrapper[4856]: I1122 07:04:53.709747 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:53 crc kubenswrapper[4856]: I1122 07:04:53.709855 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:53 crc kubenswrapper[4856]: E1122 07:04:53.710031 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:53 crc kubenswrapper[4856]: I1122 07:04:53.710320 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:53 crc kubenswrapper[4856]: E1122 07:04:53.710418 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:53 crc kubenswrapper[4856]: E1122 07:04:53.710616 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:53 crc kubenswrapper[4856]: E1122 07:04:53.710730 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:53 crc kubenswrapper[4856]: E1122 07:04:53.844936 4856 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:04:55 crc kubenswrapper[4856]: I1122 07:04:55.708964 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:55 crc kubenswrapper[4856]: I1122 07:04:55.709018 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:55 crc kubenswrapper[4856]: I1122 07:04:55.709127 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:55 crc kubenswrapper[4856]: I1122 07:04:55.709197 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:55 crc kubenswrapper[4856]: E1122 07:04:55.709377 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:55 crc kubenswrapper[4856]: I1122 07:04:55.709456 4856 scope.go:117] "RemoveContainer" containerID="85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956" Nov 22 07:04:55 crc kubenswrapper[4856]: E1122 07:04:55.709670 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:55 crc kubenswrapper[4856]: E1122 07:04:55.709732 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:55 crc kubenswrapper[4856]: E1122 07:04:55.709750 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:57 crc kubenswrapper[4856]: I1122 07:04:57.156722 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/1.log" Nov 22 07:04:57 crc kubenswrapper[4856]: I1122 07:04:57.157788 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerStarted","Data":"96df3ae9766dbae643106da1572f9d0c1c5787e1e82f6dbb57a18cf7ba6e3c10"} Nov 22 07:04:57 crc kubenswrapper[4856]: I1122 07:04:57.709419 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:57 crc kubenswrapper[4856]: E1122 07:04:57.709852 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 22 07:04:57 crc kubenswrapper[4856]: I1122 07:04:57.709542 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:57 crc kubenswrapper[4856]: E1122 07:04:57.710156 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 22 07:04:57 crc kubenswrapper[4856]: I1122 07:04:57.709493 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:57 crc kubenswrapper[4856]: E1122 07:04:57.710396 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-722tb" podUID="dda6b6e5-61a2-459c-9207-5e5aa500869f" Nov 22 07:04:57 crc kubenswrapper[4856]: I1122 07:04:57.709488 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:57 crc kubenswrapper[4856]: E1122 07:04:57.710655 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.709292 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.709419 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.709471 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.709672 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.713065 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.713405 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.715566 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.715649 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.715923 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 22 07:04:59 crc kubenswrapper[4856]: I1122 07:04:59.716196 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.240055 4856 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.269605 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2szb8"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.270313 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.270339 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lpbp9"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.271312 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.273640 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.273814 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.273941 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.274597 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-csttt"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.274812 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.274916 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.277450 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.277812 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.277932 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.278222 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.278911 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.278981 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.279177 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.279319 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.279484 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.279713 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.287870 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.288084 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.288146 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.288454 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.288655 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.288710 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: W1122 07:05:04.288805 4856 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 22 07:05:04 crc kubenswrapper[4856]: E1122 07:05:04.288839 4856 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 22 07:05:04 crc kubenswrapper[4856]: W1122 07:05:04.290008 4856 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 22 07:05:04 crc kubenswrapper[4856]: E1122 07:05:04.290043 4856 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 22 07:05:04 crc kubenswrapper[4856]: W1122 07:05:04.290116 4856 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 22 07:05:04 crc kubenswrapper[4856]: E1122 07:05:04.290129 4856 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 22 07:05:04 crc kubenswrapper[4856]: W1122 07:05:04.290176 4856 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 22 07:05:04 crc kubenswrapper[4856]: E1122 07:05:04.290212 4856 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 22 07:05:04 crc kubenswrapper[4856]: W1122 07:05:04.290264 4856 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 22 07:05:04 crc kubenswrapper[4856]: E1122 07:05:04.290283 4856 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 22 07:05:04 crc kubenswrapper[4856]: W1122 07:05:04.290557 4856 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: configmaps "openshift-global-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 22 07:05:04 crc kubenswrapper[4856]: E1122 07:05:04.290589 4856 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-global-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.290786 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 22 07:05:04 crc kubenswrapper[4856]: W1122 07:05:04.290984 4856 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Nov 22 07:05:04 crc kubenswrapper[4856]: E1122 07:05:04.291028 4856 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.292170 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-57k7r"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.294501 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.300542 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302079 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302133 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302220 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302465 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302495 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302615 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302741 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302616 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302657 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302077 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302695 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.302735 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.303510 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.303611 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.306237 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.308590 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-56qbr"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.308922 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.309212 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.309869 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.310317 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.310691 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.311075 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-psfmg"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.311636 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.311898 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.312428 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.314866 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rc5xn"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.315290 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sl25x"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.315631 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.315835 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.316458 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.316985 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.317766 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.319474 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.319829 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.322968 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.323462 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2hxc"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.323797 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.324181 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.324901 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mf6qh"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.325340 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.327463 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.327804 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.328033 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.328213 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.328362 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.328371 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.328600 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.328820 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.332885 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333128 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333159 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333323 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333353 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333489 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333617 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333719 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.333719 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.334219 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.334292 4856 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x4fc7"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.334563 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.334594 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.334607 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.335343 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-klclm"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.336049 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.337168 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.337557 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.337680 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.337844 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338040 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338158 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338293 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338400 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338508 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338716 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338829 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.338935 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.348717 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.348779 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.348844 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.348972 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.349050 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.349145 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.349300 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.349322 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.349465 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.350509 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.350702 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.350844 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.350963 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.351134 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.351243 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.351532 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.351733 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.351788 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.351901 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.351990 4856 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.352045 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.352153 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.352192 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.352280 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.352313 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.352555 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.353444 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.354068 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.370895 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b4jc\" (UniqueName: \"kubernetes.io/projected/d47abc5e-74bd-4f9a-9a99-1d83d8834ce0-kube-api-access-5b4jc\") pod \"cluster-samples-operator-665b6dd947-89kjz\" (UID: \"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371244 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dff5c22-ed64-4f83-9f80-3c618d5585ab-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371418 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-etcd-serving-ca\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371496 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/689f4fd5-222f-46a6-a41b-bc519d7c1005-machine-approver-tls\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371626 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5mj5\" 
(UniqueName: \"kubernetes.io/projected/52414feb-0c08-4591-a84a-985167853ba3-kube-api-access-h5mj5\") pod \"downloads-7954f5f757-psfmg\" (UID: \"52414feb-0c08-4591-a84a-985167853ba3\") " pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371699 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371763 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371829 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-service-ca\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371905 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-oauth-serving-cert\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.371976 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcljw\" (UniqueName: \"kubernetes.io/projected/b237e36f-a520-4471-82a5-5d26aff897b1-kube-api-access-fcljw\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372044 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-config\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372116 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-etcd-client\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8ht\" (UniqueName: \"kubernetes.io/projected/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-kube-api-access-tb8ht\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: 
\"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372261 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40737369-e550-4119-b969-44e99b9ec9e7-serving-cert\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372341 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372415 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372486 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372590 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jmq8\" (UniqueName: \"kubernetes.io/projected/4dff5c22-ed64-4f83-9f80-3c618d5585ab-kube-api-access-7jmq8\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372660 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-serving-cert\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372765 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xtv\" (UniqueName: \"kubernetes.io/projected/2cb75722-66d1-46a3-b867-1cab32f01ede-kube-api-access-45xtv\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372861 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: 
\"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.372932 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373008 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zrq\" (UniqueName: \"kubernetes.io/projected/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-kube-api-access-t2zrq\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373086 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdw6d\" (UniqueName: \"kubernetes.io/projected/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-kube-api-access-bdw6d\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373158 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-serving-cert\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373227 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9a5e5b4-a255-4888-b381-e743b2440738-audit-dir\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373320 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373387 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/689f4fd5-222f-46a6-a41b-bc519d7c1005-auth-proxy-config\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373461 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dff5c22-ed64-4f83-9f80-3c618d5585ab-config\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373547 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-trusted-ca-bundle\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373628 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373714 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373782 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/689f4fd5-222f-46a6-a41b-bc519d7c1005-config\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373856 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-oauth-config\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.373983 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.374259 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wcx2\" (UniqueName: \"kubernetes.io/projected/689f4fd5-222f-46a6-a41b-bc519d7c1005-kube-api-access-9wcx2\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.374345 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l74cc\" (UniqueName: \"kubernetes.io/projected/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-kube-api-access-l74cc\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.374485 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.374647 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d47abc5e-74bd-4f9a-9a99-1d83d8834ce0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-89kjz\" (UID: \"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.374736 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v4xl\" (UniqueName: \"kubernetes.io/projected/40737369-e550-4119-b969-44e99b9ec9e7-kube-api-access-9v4xl\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.374855 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b237e36f-a520-4471-82a5-5d26aff897b1-audit-dir\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375015 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-serving-cert\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375204 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-image-import-ca\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375345 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-audit\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375414 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-console-config\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375493 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375610 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mgsh\" (UniqueName: \"kubernetes.io/projected/d9a5e5b4-a255-4888-b381-e743b2440738-kube-api-access-8mgsh\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375717 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-config\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375799 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375872 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-encryption-config\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375818 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.375940 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-audit-policies\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376070 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-encryption-config\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376157 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4dff5c22-ed64-4f83-9f80-3c618d5585ab-images\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376241 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376319 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376388 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9a5e5b4-a255-4888-b381-e743b2440738-node-pullsecrets\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376465 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376573 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-etcd-client\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.376595 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.378992 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mdfqc"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.379594 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cxh4g"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.380087 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.380357 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.380732 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.380846 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.380926 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.380995 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.381032 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.381129 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.381267 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.381431 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.382003 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.383637 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.384267 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.384322 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kbbhd"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.384617 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.385137 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.389553 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-2grnj"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.390045 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.391673 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.392266 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.393433 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.395136 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.396563 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.397614 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.398079 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.398975 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.407344 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.408163 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.418586 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.419065 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.419220 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.419151 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.419813 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.420504 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.420638 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.421372 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.434023 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.436005 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.440681 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-csttt"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.440965 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lpbp9"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.442818 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rc5xn"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.458207 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.459749 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.470749 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2szb8"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.472410 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-57k7r"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.474832 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hlxfj"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.476451 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.477596 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2zrq\" (UniqueName: \"kubernetes.io/projected/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-kube-api-access-t2zrq\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.477700 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdw6d\" (UniqueName: \"kubernetes.io/projected/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-kube-api-access-bdw6d\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.477787 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/312bb5c3-467c-48bb-967f-b8aadfa43e94-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mdfqc\" (UID: \"312bb5c3-467c-48bb-967f-b8aadfa43e94\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.477863 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.477931 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/689f4fd5-222f-46a6-a41b-bc519d7c1005-auth-proxy-config\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.477996 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-serving-cert\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478066 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9a5e5b4-a255-4888-b381-e743b2440738-audit-dir\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478163 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dff5c22-ed64-4f83-9f80-3c618d5585ab-config\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478252 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-trusted-ca-bundle\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478310 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d9a5e5b4-a255-4888-b381-e743b2440738-audit-dir\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478328 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e21e2b8-4129-4670-96a9-e587637a3a04-metrics-tls\") pod \"dns-operator-744455d44c-cxh4g\" (UID: \"8e21e2b8-4129-4670-96a9-e587637a3a04\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/689f4fd5-222f-46a6-a41b-bc519d7c1005-config\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.478713 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479001 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wcx2\" (UniqueName: \"kubernetes.io/projected/689f4fd5-222f-46a6-a41b-bc519d7c1005-kube-api-access-9wcx2\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479077 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-oauth-config\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479152 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l74cc\" (UniqueName: \"kubernetes.io/projected/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-kube-api-access-l74cc\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479225 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479303 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d47abc5e-74bd-4f9a-9a99-1d83d8834ce0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-89kjz\" (UID: \"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479374 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v4xl\" (UniqueName: \"kubernetes.io/projected/40737369-e550-4119-b969-44e99b9ec9e7-kube-api-access-9v4xl\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479441 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b237e36f-a520-4471-82a5-5d26aff897b1-audit-dir\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-serving-cert\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-image-import-ca\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479702 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-console-config\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479770 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-serving-cert\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479837 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-audit\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479909 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mgsh\" (UniqueName: \"kubernetes.io/projected/d9a5e5b4-a255-4888-b381-e743b2440738-kube-api-access-8mgsh\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479987 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvrxm\" (UniqueName: \"kubernetes.io/projected/8e21e2b8-4129-4670-96a9-e587637a3a04-kube-api-access-gvrxm\") pod \"dns-operator-744455d44c-cxh4g\" (UID: \"8e21e2b8-4129-4670-96a9-e587637a3a04\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480053 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480117 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-config\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480188 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lphtx\" (UniqueName: \"kubernetes.io/projected/18513a7b-b0ef-4b3a-be63-ebd97482baa7-kube-api-access-lphtx\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480264 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-audit-policies\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480334 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-encryption-config\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480410 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18513a7b-b0ef-4b3a-be63-ebd97482baa7-config\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: 
\"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480958 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480960 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-encryption-config\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4dff5c22-ed64-4f83-9f80-3c618d5585ab-images\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481054 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9a5e5b4-a255-4888-b381-e743b2440738-node-pullsecrets\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481143 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481160 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-etcd-client\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481184 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5b4jc\" (UniqueName: \"kubernetes.io/projected/d47abc5e-74bd-4f9a-9a99-1d83d8834ce0-kube-api-access-5b4jc\") pod \"cluster-samples-operator-665b6dd947-89kjz\" (UID: \"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481211 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dff5c22-ed64-4f83-9f80-3c618d5585ab-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481229 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-etcd-serving-ca\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481254 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/689f4fd5-222f-46a6-a41b-bc519d7c1005-machine-approver-tls\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481279 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481302 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-service-ca\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481325 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5mj5\" (UniqueName: \"kubernetes.io/projected/52414feb-0c08-4591-a84a-985167853ba3-kube-api-access-h5mj5\") pod \"downloads-7954f5f757-psfmg\" (UID: \"52414feb-0c08-4591-a84a-985167853ba3\") " pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481347 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481377 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxvs\" (UniqueName: \"kubernetes.io/projected/312bb5c3-467c-48bb-967f-b8aadfa43e94-kube-api-access-hdxvs\") pod 
\"multus-admission-controller-857f4d67dd-mdfqc\" (UID: \"312bb5c3-467c-48bb-967f-b8aadfa43e94\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481403 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-config\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-oauth-serving-cert\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481443 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcljw\" (UniqueName: \"kubernetes.io/projected/b237e36f-a520-4471-82a5-5d26aff897b1-kube-api-access-fcljw\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481474 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-etcd-client\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481501 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8ht\" (UniqueName: \"kubernetes.io/projected/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-kube-api-access-tb8ht\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481541 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40737369-e550-4119-b969-44e99b9ec9e7-serving-cert\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18513a7b-b0ef-4b3a-be63-ebd97482baa7-serving-cert\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481593 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481616 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481641 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jmq8\" (UniqueName: \"kubernetes.io/projected/4dff5c22-ed64-4f83-9f80-3c618d5585ab-kube-api-access-7jmq8\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481698 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-serving-cert\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481721 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xtv\" (UniqueName: \"kubernetes.io/projected/2cb75722-66d1-46a3-b867-1cab32f01ede-kube-api-access-45xtv\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481744 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.481767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.482384 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-service-ca-bundle\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.479875 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-trusted-ca-bundle\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.484733 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4dff5c22-ed64-4f83-9f80-3c618d5585ab-images\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.485588 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.485660 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-image-import-ca\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.485809 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b237e36f-a520-4471-82a5-5d26aff897b1-audit-dir\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.486132 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-config\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.486576 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/689f4fd5-222f-46a6-a41b-bc519d7c1005-auth-proxy-config\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.487443 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dff5c22-ed64-4f83-9f80-3c618d5585ab-config\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.487953 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-audit\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.488386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-config\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.488917 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-oauth-serving-cert\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.494376 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.495009 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-etcd-client\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.495436 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.495489 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d9a5e5b4-a255-4888-b381-e743b2440738-node-pullsecrets\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.495954 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dff5c22-ed64-4f83-9f80-3c618d5585ab-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.497097 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-console-config\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.497142 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-serving-cert\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.497151 4856 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.497260 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.497944 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-etcd-serving-ca\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.497987 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9a5e5b4-a255-4888-b381-e743b2440738-trusted-ca-bundle\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.480308 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/689f4fd5-222f-46a6-a41b-bc519d7c1005-config\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.498757 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-service-ca\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.499239 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-audit-policies\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.500214 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40737369-e550-4119-b969-44e99b9ec9e7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.505603 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.506177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b237e36f-a520-4471-82a5-5d26aff897b1-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 
07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.509881 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d47abc5e-74bd-4f9a-9a99-1d83d8834ce0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-89kjz\" (UID: \"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.511261 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.511841 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-encryption-config\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.511952 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-encryption-config\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.512057 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d9a5e5b4-a255-4888-b381-e743b2440738-serving-cert\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.512089 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-serving-cert\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.512676 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x4fc7"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.515231 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-serving-cert\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.520856 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/689f4fd5-222f-46a6-a41b-bc519d7c1005-machine-approver-tls\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.521039 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b237e36f-a520-4471-82a5-5d26aff897b1-etcd-client\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.521113 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40737369-e550-4119-b969-44e99b9ec9e7-serving-cert\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.522777 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-oauth-config\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.527003 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.527312 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.527381 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.527793 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.527837 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.534730 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.542843 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.544884 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.546026 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2hxc"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.547609 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-psfmg"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.548363 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.549702 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.551820 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-56qbr"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.553202 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sl25x"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.555742 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.555961 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.556812 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.563653 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.567000 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kbbhd"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.568672 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.571594 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-klclm"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.571784 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.574592 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mdfqc"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.574658 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.575779 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cxh4g"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.582693 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583752 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvrxm\" (UniqueName: \"kubernetes.io/projected/8e21e2b8-4129-4670-96a9-e587637a3a04-kube-api-access-gvrxm\") pod \"dns-operator-744455d44c-cxh4g\" (UID: \"8e21e2b8-4129-4670-96a9-e587637a3a04\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583787 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lphtx\" (UniqueName: \"kubernetes.io/projected/18513a7b-b0ef-4b3a-be63-ebd97482baa7-kube-api-access-lphtx\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583810 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/18513a7b-b0ef-4b3a-be63-ebd97482baa7-config\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583853 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxvs\" (UniqueName: \"kubernetes.io/projected/312bb5c3-467c-48bb-967f-b8aadfa43e94-kube-api-access-hdxvs\") pod \"multus-admission-controller-857f4d67dd-mdfqc\" (UID: \"312bb5c3-467c-48bb-967f-b8aadfa43e94\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583763 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583882 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18513a7b-b0ef-4b3a-be63-ebd97482baa7-serving-cert\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583948 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/312bb5c3-467c-48bb-967f-b8aadfa43e94-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mdfqc\" (UID: \"312bb5c3-467c-48bb-967f-b8aadfa43e94\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.583987 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e21e2b8-4129-4670-96a9-e587637a3a04-metrics-tls\") pod \"dns-operator-744455d44c-cxh4g\" (UID: \"8e21e2b8-4129-4670-96a9-e587637a3a04\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.585279 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mf6qh"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.585898 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.589751 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.591688 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hlxfj"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.592077 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.594028 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.597762 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gpfpp"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.599681 4856 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.604916 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mvcdt"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.606830 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.607069 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mvcdt"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.609821 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nnhrm"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.610697 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.611248 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nnhrm"] Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.611813 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.632233 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.651735 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.673362 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.692174 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.711961 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.738720 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.752544 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.771787 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.792855 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.813025 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.832167 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.852833 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.872853 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.892323 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.911955 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.932677 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.973054 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 22 07:05:04 crc kubenswrapper[4856]: I1122 07:05:04.992747 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.012726 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.033008 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.053811 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.074465 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.092612 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.112929 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.134436 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.153215 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.173364 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.179798 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/312bb5c3-467c-48bb-967f-b8aadfa43e94-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-mdfqc\" (UID: \"312bb5c3-467c-48bb-967f-b8aadfa43e94\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:05 crc 
kubenswrapper[4856]: I1122 07:05:05.192728 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.213190 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.235060 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.253613 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.272502 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.293222 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.297666 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8e21e2b8-4129-4670-96a9-e587637a3a04-metrics-tls\") pod \"dns-operator-744455d44c-cxh4g\" (UID: \"8e21e2b8-4129-4670-96a9-e587637a3a04\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.313414 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.334252 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.353011 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.373557 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.390809 4856 request.go:700] Waited for 1.006038174s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0 Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.393950 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.412978 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.433490 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.452846 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.474939 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 22 07:05:05 
crc kubenswrapper[4856]: E1122 07:05:05.480575 4856 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.480735 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca podName:bbedaf28-a7ca-437c-93a8-8c676c7a9f1f nodeName:}" failed. No retries permitted until 2025-11-22 07:05:05.980701464 +0000 UTC m=+148.394094722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca") pod "controller-manager-879f6c89f-csttt" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f") : failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.482745 4856 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.482797 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles podName:bbedaf28-a7ca-437c-93a8-8c676c7a9f1f nodeName:}" failed. No retries permitted until 2025-11-22 07:05:05.982787183 +0000 UTC m=+148.396180431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles") pod "controller-manager-879f6c89f-csttt" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f") : failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.486908 4856 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.486953 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config podName:bbedaf28-a7ca-437c-93a8-8c676c7a9f1f nodeName:}" failed. No retries permitted until 2025-11-22 07:05:05.986941563 +0000 UTC m=+148.400334831 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config") pod "controller-manager-879f6c89f-csttt" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f") : failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.493798 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.499979 4856 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.500080 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert podName:bbedaf28-a7ca-437c-93a8-8c676c7a9f1f nodeName:}" failed. No retries permitted until 2025-11-22 07:05:06.000054028 +0000 UTC m=+148.413447286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert") pod "controller-manager-879f6c89f-csttt" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f") : failed to sync secret cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.513442 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.533009 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.553475 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.573325 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.584682 4856 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.584714 4856 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.584814 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18513a7b-b0ef-4b3a-be63-ebd97482baa7-serving-cert podName:18513a7b-b0ef-4b3a-be63-ebd97482baa7 nodeName:}" failed. No retries permitted until 2025-11-22 07:05:06.084779189 +0000 UTC m=+148.498172477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/18513a7b-b0ef-4b3a-be63-ebd97482baa7-serving-cert") pod "service-ca-operator-777779d784-6s7wh" (UID: "18513a7b-b0ef-4b3a-be63-ebd97482baa7") : failed to sync secret cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: E1122 07:05:05.584850 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18513a7b-b0ef-4b3a-be63-ebd97482baa7-config podName:18513a7b-b0ef-4b3a-be63-ebd97482baa7 nodeName:}" failed. No retries permitted until 2025-11-22 07:05:06.08483551 +0000 UTC m=+148.498228808 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/18513a7b-b0ef-4b3a-be63-ebd97482baa7-config") pod "service-ca-operator-777779d784-6s7wh" (UID: "18513a7b-b0ef-4b3a-be63-ebd97482baa7") : failed to sync configmap cache: timed out waiting for the condition Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.592865 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.613311 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.633423 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.653883 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.673673 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.693473 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.713507 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.733644 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.754830 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.772869 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.792992 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.812780 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.832163 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.851725 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.873019 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.913118 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.913176 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 
07:05:05.953620 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.972852 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 22 07:05:05 crc kubenswrapper[4856]: I1122 07:05:05.993869 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.009805 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.010181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.010556 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.010663 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.013350 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.033164 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.052504 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.073690 4856 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.093155 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.111757 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18513a7b-b0ef-4b3a-be63-ebd97482baa7-serving-cert\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.112008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18513a7b-b0ef-4b3a-be63-ebd97482baa7-config\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.113211 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.113358 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18513a7b-b0ef-4b3a-be63-ebd97482baa7-config\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.121982 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18513a7b-b0ef-4b3a-be63-ebd97482baa7-serving-cert\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.153884 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2zrq\" (UniqueName: \"kubernetes.io/projected/4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c-kube-api-access-t2zrq\") pod \"openshift-config-operator-7777fb866f-ntxhf\" (UID: \"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.196722 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wcx2\" (UniqueName: \"kubernetes.io/projected/689f4fd5-222f-46a6-a41b-bc519d7c1005-kube-api-access-9wcx2\") pod \"machine-approver-56656f9798-nrtm2\" (UID: \"689f4fd5-222f-46a6-a41b-bc519d7c1005\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.213057 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.215974 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.233555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l74cc\" (UniqueName: \"kubernetes.io/projected/4f1bb024-f9c1-46f4-8805-4dd12cf9a369-kube-api-access-l74cc\") pod \"openshift-controller-manager-operator-756b6f6bc6-lr6p4\" (UID: \"4f1bb024-f9c1-46f4-8805-4dd12cf9a369\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.249365 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mgsh\" (UniqueName: \"kubernetes.io/projected/d9a5e5b4-a255-4888-b381-e743b2440738-kube-api-access-8mgsh\") pod \"apiserver-76f77b778f-lpbp9\" (UID: \"d9a5e5b4-a255-4888-b381-e743b2440738\") " pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.268094 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v4xl\" (UniqueName: \"kubernetes.io/projected/40737369-e550-4119-b969-44e99b9ec9e7-kube-api-access-9v4xl\") pod \"authentication-operator-69f744f599-56qbr\" (UID: \"40737369-e550-4119-b969-44e99b9ec9e7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.291630 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcljw\" (UniqueName: \"kubernetes.io/projected/b237e36f-a520-4471-82a5-5d26aff897b1-kube-api-access-fcljw\") pod \"apiserver-7bbb656c7d-rpfn9\" (UID: \"b237e36f-a520-4471-82a5-5d26aff897b1\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.305536 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8ht\" (UniqueName: \"kubernetes.io/projected/7c5787e0-4c30-4ab6-8fa4-0936744b8fc4-kube-api-access-tb8ht\") pod \"cluster-image-registry-operator-dc59b4c8b-cbrkr\" (UID: \"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.328981 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jmq8\" (UniqueName: \"kubernetes.io/projected/4dff5c22-ed64-4f83-9f80-3c618d5585ab-kube-api-access-7jmq8\") pod \"machine-api-operator-5694c8668f-2szb8\" (UID: \"4dff5c22-ed64-4f83-9f80-3c618d5585ab\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.347417 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5mj5\" (UniqueName: \"kubernetes.io/projected/52414feb-0c08-4591-a84a-985167853ba3-kube-api-access-h5mj5\") pod \"downloads-7954f5f757-psfmg\" (UID: \"52414feb-0c08-4591-a84a-985167853ba3\") " pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:05:06 crc 
kubenswrapper[4856]: I1122 07:05:06.349234 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.363698 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.367341 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xtv\" (UniqueName: \"kubernetes.io/projected/2cb75722-66d1-46a3-b867-1cab32f01ede-kube-api-access-45xtv\") pod \"console-f9d7485db-57k7r\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.370841 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.390292 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b4jc\" (UniqueName: \"kubernetes.io/projected/d47abc5e-74bd-4f9a-9a99-1d83d8834ce0-kube-api-access-5b4jc\") pod \"cluster-samples-operator-665b6dd947-89kjz\" (UID: \"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.411554 4856 request.go:700] Waited for 1.827424739s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.416023 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvrxm\" (UniqueName: \"kubernetes.io/projected/8e21e2b8-4129-4670-96a9-e587637a3a04-kube-api-access-gvrxm\") pod \"dns-operator-744455d44c-cxh4g\" (UID: \"8e21e2b8-4129-4670-96a9-e587637a3a04\") " pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.417774 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.419718 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.427608 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lphtx\" (UniqueName: \"kubernetes.io/projected/18513a7b-b0ef-4b3a-be63-ebd97482baa7-kube-api-access-lphtx\") pod \"service-ca-operator-777779d784-6s7wh\" (UID: \"18513a7b-b0ef-4b3a-be63-ebd97482baa7\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.453293 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.454257 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxvs\" (UniqueName: \"kubernetes.io/projected/312bb5c3-467c-48bb-967f-b8aadfa43e94-kube-api-access-hdxvs\") pod \"multus-admission-controller-857f4d67dd-mdfqc\" (UID: \"312bb5c3-467c-48bb-967f-b8aadfa43e94\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.461610 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.469928 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.474905 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.480947 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.493441 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.511879 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.514873 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.533179 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.533554 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.553554 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.558725 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.572952 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.590870 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.592739 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.613223 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.631832 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.656165 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.672552 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.693454 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.702179 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.720361 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.733741 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.735380 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.744897 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.752843 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.772732 4856 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.777684 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdw6d\" (UniqueName: \"kubernetes.io/projected/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-kube-api-access-bdw6d\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.792855 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.803198 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config\") pod \"controller-manager-879f6c89f-csttt\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844272 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b98d6b-28cd-4530-ac68-f717832a84b0-config\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844564 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljg6r\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-kube-api-access-ljg6r\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844669 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89b98d6b-28cd-4530-ac68-f717832a84b0-serving-cert\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844709 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-tls\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844752 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smpxp\" (UniqueName: \"kubernetes.io/projected/89b98d6b-28cd-4530-ac68-f717832a84b0-kube-api-access-smpxp\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844813 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-certificates\") pod \"image-registry-697d97f7c8-sl25x\" (UID: 
\"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844845 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d4dd468-0b0e-4767-aa06-7800fd9c449f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844872 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6d4dd468-0b0e-4767-aa06-7800fd9c449f-tmpfs\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844906 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844929 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7faca66b-795d-46b2-aebd-53f45fdb51de-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.844978 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89b98d6b-28cd-4530-ac68-f717832a84b0-trusted-ca\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.845000 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-trusted-ca\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.845063 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-bound-sa-token\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.845085 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7faca66b-795d-46b2-aebd-53f45fdb51de-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: E1122 07:05:06.845589 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:07.345572251 +0000 UTC m=+149.758965529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949415 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949597 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-client\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949655 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-stats-auth\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949673 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949689 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsn98\" (UniqueName: \"kubernetes.io/projected/e611448f-b8a0-4e60-94a5-51d929ea1b5f-kube-api-access-nsn98\") pod \"migrator-59844c95c7-cbrxg\" (UID: \"e611448f-b8a0-4e60-94a5-51d929ea1b5f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 
07:05:06.949728 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fdc4\" (UniqueName: \"kubernetes.io/projected/95c0f9f6-974c-4169-a84c-92f57fb96f2e-kube-api-access-2fdc4\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bq98\" (UniqueName: \"kubernetes.io/projected/44fda25c-1ecf-4334-803c-106306261877-kube-api-access-2bq98\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjfc\" (UniqueName: \"kubernetes.io/projected/aacf6535-3d49-4077-af9d-3d947615c61b-kube-api-access-mbjfc\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949822 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf6535-3d49-4077-af9d-3d947615c61b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949841 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f594bec6-55be-4db1-a25e-1fbe651b3eb2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949860 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3616ce-3561-4c0c-8901-c713984631f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-ca\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949920 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7faca66b-795d-46b2-aebd-53f45fdb51de-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.949970 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44fda25c-1ecf-4334-803c-106306261877-profile-collector-cert\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950007 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f594bec6-55be-4db1-a25e-1fbe651b3eb2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950023 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7bb875-88e2-48e4-a81b-188f251742c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950039 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-config\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950074 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950090 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljg6r\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-kube-api-access-ljg6r\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950123 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r67bs\" (UniqueName: \"kubernetes.io/projected/aa013f01-5701-4d63-bc2c-284f5d4a397f-kube-api-access-r67bs\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950142 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89b98d6b-28cd-4530-ac68-f717832a84b0-serving-cert\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950158 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950175 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-tls\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950192 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5h6s\" (UniqueName: \"kubernetes.io/projected/c7c1d403-b7f4-4d42-b707-54ac23853d3f-kube-api-access-q5h6s\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950207 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950231 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f36017-681f-459c-a204-02bcaaf27d89-profile-collector-cert\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950247 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d7bb875-88e2-48e4-a81b-188f251742c2-config\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: 
\"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7c1d403-b7f4-4d42-b707-54ac23853d3f-service-ca-bundle\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950279 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c0f9f6-974c-4169-a84c-92f57fb96f2e-serving-cert\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950296 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4vj\" (UniqueName: \"kubernetes.io/projected/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-kube-api-access-qb4vj\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950338 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6d4dd468-0b0e-4767-aa06-7800fd9c449f-tmpfs\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950354 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f36017-681f-459c-a204-02bcaaf27d89-srv-cert\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950371 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlh2j\" (UniqueName: \"kubernetes.io/projected/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-kube-api-access-hlh2j\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950387 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-signing-cabundle\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950413 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7faca66b-795d-46b2-aebd-53f45fdb51de-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950454 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950471 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stzmp\" (UniqueName: \"kubernetes.io/projected/9a816ade-c1d6-48c0-a246-4d3407f90e58-kube-api-access-stzmp\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950495 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950533 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0ca9e5-223e-4597-a38f-9992ca7d00d1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950577 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-config\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950602 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-service-ca\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3bb21a-5d8a-48e2-b115-9953c3021a67-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-gckss\" (UID: \"ed3bb21a-5d8a-48e2-b115-9953c3021a67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950647 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7bb875-88e2-48e4-a81b-188f251742c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950672 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be0ca9e5-223e-4597-a38f-9992ca7d00d1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950698 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44fda25c-1ecf-4334-803c-106306261877-srv-cert\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950715 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcz2d\" (UniqueName: \"kubernetes.io/projected/74f36017-681f-459c-a204-02bcaaf27d89-kube-api-access-hcz2d\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950730 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-dir\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950745 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbgr\" (UniqueName: \"kubernetes.io/projected/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-kube-api-access-pcbgr\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950771 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxs7\" (UniqueName: \"kubernetes.io/projected/4e3616ce-3561-4c0c-8901-c713984631f6-kube-api-access-4nxs7\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950866 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950915 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f594bec6-55be-4db1-a25e-1fbe651b3eb2-config\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.950974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-bound-sa-token\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.951006 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-default-certificate\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.951037 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-client-ca\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.951062 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf6535-3d49-4077-af9d-3d947615c61b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.951090 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3616ce-3561-4c0c-8901-c713984631f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.951118 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be0ca9e5-223e-4597-a38f-9992ca7d00d1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.951139 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952379 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87wt\" (UniqueName: \"kubernetes.io/projected/6d4dd468-0b0e-4767-aa06-7800fd9c449f-kube-api-access-c87wt\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952445 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b98d6b-28cd-4530-ac68-f717832a84b0-config\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952692 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb186618-19e9-4d7e-93ab-38fba228147d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952728 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-trusted-ca\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-policies\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952799 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-metrics-certs\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952845 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c2rb\" (UniqueName: \"kubernetes.io/projected/eb186618-19e9-4d7e-93ab-38fba228147d-kube-api-access-9c2rb\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952893 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smpxp\" (UniqueName: 
\"kubernetes.io/projected/89b98d6b-28cd-4530-ac68-f717832a84b0-kube-api-access-smpxp\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952918 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fkss\" (UniqueName: \"kubernetes.io/projected/ed3bb21a-5d8a-48e2-b115-9953c3021a67-kube-api-access-5fkss\") pod \"package-server-manager-789f6589d5-gckss\" (UID: \"ed3bb21a-5d8a-48e2-b115-9953c3021a67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-certificates\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.952985 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d4dd468-0b0e-4767-aa06-7800fd9c449f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953011 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-metrics-tls\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb186618-19e9-4d7e-93ab-38fba228147d-images\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953111 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d4dd468-0b0e-4767-aa06-7800fd9c449f-webhook-cert\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953139 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb186618-19e9-4d7e-93ab-38fba228147d-proxy-tls\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953165 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953190 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-signing-key\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953213 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.953238 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-serving-cert\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.954165 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89b98d6b-28cd-4530-ac68-f717832a84b0-trusted-ca\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.954222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-trusted-ca\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.954267 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7faca66b-795d-46b2-aebd-53f45fdb51de-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.954950 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-certificates\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.956201 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/6d4dd468-0b0e-4767-aa06-7800fd9c449f-tmpfs\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: E1122 07:05:06.956602 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:07.456577603 +0000 UTC m=+149.869970861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.956800 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89b98d6b-28cd-4530-ac68-f717832a84b0-config\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.959942 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89b98d6b-28cd-4530-ac68-f717832a84b0-trusted-ca\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.960449 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d4dd468-0b0e-4767-aa06-7800fd9c449f-apiservice-cert\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.961405 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-trusted-ca\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.962779 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7faca66b-795d-46b2-aebd-53f45fdb51de-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.963628 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89b98d6b-28cd-4530-ac68-f717832a84b0-serving-cert\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.966201 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-tls\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:06 crc kubenswrapper[4856]: I1122 07:05:06.990339 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-bound-sa-token\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.009986 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljg6r\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-kube-api-access-ljg6r\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.028550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smpxp\" (UniqueName: \"kubernetes.io/projected/89b98d6b-28cd-4530-ac68-f717832a84b0-kube-api-access-smpxp\") pod \"console-operator-58897d9998-rc5xn\" (UID: \"89b98d6b-28cd-4530-ac68-f717832a84b0\") " pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.052664 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.055332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be0ca9e5-223e-4597-a38f-9992ca7d00d1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.055371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.055404 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/788765a4-87d8-4477-90b7-97ee6549e1ba-node-bootstrap-token\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.055430 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c87wt\" (UniqueName: \"kubernetes.io/projected/6d4dd468-0b0e-4767-aa06-7800fd9c449f-kube-api-access-c87wt\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.057135 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.060095 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb186618-19e9-4d7e-93ab-38fba228147d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068383 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb186618-19e9-4d7e-93ab-38fba228147d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-trusted-ca\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068500 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-policies\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068551 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-metrics-certs\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c2rb\" (UniqueName: \"kubernetes.io/projected/eb186618-19e9-4d7e-93ab-38fba228147d-kube-api-access-9c2rb\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fkss\" (UniqueName: \"kubernetes.io/projected/ed3bb21a-5d8a-48e2-b115-9953c3021a67-kube-api-access-5fkss\") pod \"package-server-manager-789f6589d5-gckss\" (UID: \"ed3bb21a-5d8a-48e2-b115-9953c3021a67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068671 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-metrics-tls\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068704 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068728 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg55b\" (UniqueName: \"kubernetes.io/projected/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-kube-api-access-dg55b\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068814 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85xgw\" (UniqueName: \"kubernetes.io/projected/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-kube-api-access-85xgw\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb186618-19e9-4d7e-93ab-38fba228147d-images\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.068987 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d4dd468-0b0e-4767-aa06-7800fd9c449f-webhook-cert\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.069020 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb186618-19e9-4d7e-93ab-38fba228147d-proxy-tls\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.069052 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.069083 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-signing-key\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.069113 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.069142 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-serving-cert\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.069222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-stats-auth\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.069253 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070224 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070282 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-client\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070326 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20d49e34-b412-49d0-8236-227ae0043102-secret-volume\") pod \"collect-profiles-29396580-dt9th\" (UID: 
\"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070381 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsn98\" (UniqueName: \"kubernetes.io/projected/e611448f-b8a0-4e60-94a5-51d929ea1b5f-kube-api-access-nsn98\") pod \"migrator-59844c95c7-cbrxg\" (UID: \"e611448f-b8a0-4e60-94a5-51d929ea1b5f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fdc4\" (UniqueName: \"kubernetes.io/projected/95c0f9f6-974c-4169-a84c-92f57fb96f2e-kube-api-access-2fdc4\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070441 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bq98\" (UniqueName: \"kubernetes.io/projected/44fda25c-1ecf-4334-803c-106306261877-kube-api-access-2bq98\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070467 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbjfc\" (UniqueName: \"kubernetes.io/projected/aacf6535-3d49-4077-af9d-3d947615c61b-kube-api-access-mbjfc\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070574 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-csi-data-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070671 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/15bf26f9-ee66-489a-bda0-cacb5b094844-metrics-tls\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070698 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-mountpoint-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070807 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf6535-3d49-4077-af9d-3d947615c61b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" 
Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070829 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-socket-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a5ccb31-7635-4995-926a-927e72a69546-cert\") pod \"ingress-canary-nnhrm\" (UID: \"0a5ccb31-7635-4995-926a-927e72a69546\") " pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.070897 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f594bec6-55be-4db1-a25e-1fbe651b3eb2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071042 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58g2l\" (UniqueName: \"kubernetes.io/projected/15bf26f9-ee66-489a-bda0-cacb5b094844-kube-api-access-58g2l\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071092 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3616ce-3561-4c0c-8901-c713984631f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071188 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-ca\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071212 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d49e34-b412-49d0-8236-227ae0043102-config-volume\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071274 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/44fda25c-1ecf-4334-803c-106306261877-profile-collector-cert\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071438 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f594bec6-55be-4db1-a25e-1fbe651b3eb2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071465 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7bb875-88e2-48e4-a81b-188f251742c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-config\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071544 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-registration-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071575 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071799 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r67bs\" (UniqueName: \"kubernetes.io/projected/aa013f01-5701-4d63-bc2c-284f5d4a397f-kube-api-access-r67bs\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071863 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.071962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5h6s\" (UniqueName: \"kubernetes.io/projected/c7c1d403-b7f4-4d42-b707-54ac23853d3f-kube-api-access-q5h6s\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.073813 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.073862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f36017-681f-459c-a204-02bcaaf27d89-profile-collector-cert\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.073911 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d7bb875-88e2-48e4-a81b-188f251742c2-config\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.073938 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7c1d403-b7f4-4d42-b707-54ac23853d3f-service-ca-bundle\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.073960 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c0f9f6-974c-4169-a84c-92f57fb96f2e-serving-cert\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074017 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-proxy-tls\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074049 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f36017-681f-459c-a204-02bcaaf27d89-srv-cert\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074120 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4vj\" (UniqueName: \"kubernetes.io/projected/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-kube-api-access-qb4vj\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-signing-cabundle\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074186 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlh2j\" (UniqueName: \"kubernetes.io/projected/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-kube-api-access-hlh2j\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074250 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3616ce-3561-4c0c-8901-c713984631f6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stzmp\" (UniqueName: \"kubernetes.io/projected/9a816ade-c1d6-48c0-a246-4d3407f90e58-kube-api-access-stzmp\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: 
I1122 07:05:07.074675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0ca9e5-223e-4597-a38f-9992ca7d00d1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074768 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-config\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074799 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-service-ca\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074822 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3bb21a-5d8a-48e2-b115-9953c3021a67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gckss\" (UID: \"ed3bb21a-5d8a-48e2-b115-9953c3021a67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074846 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7bb875-88e2-48e4-a81b-188f251742c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074894 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15bf26f9-ee66-489a-bda0-cacb5b094844-config-volume\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.074917 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be0ca9e5-223e-4597-a38f-9992ca7d00d1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.075107 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcz2d\" (UniqueName: \"kubernetes.io/projected/74f36017-681f-459c-a204-02bcaaf27d89-kube-api-access-hcz2d\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.075135 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-dir\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.075191 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcbgr\" (UniqueName: \"kubernetes.io/projected/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-kube-api-access-pcbgr\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.075247 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59skw\" (UniqueName: \"kubernetes.io/projected/20d49e34-b412-49d0-8236-227ae0043102-kube-api-access-59skw\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.075272 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44fda25c-1ecf-4334-803c-106306261877-srv-cert\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.075315 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-ca\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077236 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5knnq\" (UniqueName: \"kubernetes.io/projected/788765a4-87d8-4477-90b7-97ee6549e1ba-kube-api-access-5knnq\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077284 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxs7\" (UniqueName: \"kubernetes.io/projected/4e3616ce-3561-4c0c-8901-c713984631f6-kube-api-access-4nxs7\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077335 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f594bec6-55be-4db1-a25e-1fbe651b3eb2-config\") pod 
\"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077358 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5btwt\" (UniqueName: \"kubernetes.io/projected/3a58051f-3a17-420b-aad3-453e819b7b85-kube-api-access-5btwt\") pod \"control-plane-machine-set-operator-78cbb6b69f-hzskf\" (UID: \"3a58051f-3a17-420b-aad3-453e819b7b85\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077390 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-default-certificate\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-client-ca\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077436 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p62mf\" (UniqueName: \"kubernetes.io/projected/0a5ccb31-7635-4995-926a-927e72a69546-kube-api-access-p62mf\") pod \"ingress-canary-nnhrm\" (UID: \"0a5ccb31-7635-4995-926a-927e72a69546\") " pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077500 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3616ce-3561-4c0c-8901-c713984631f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077542 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-plugins-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077568 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/788765a4-87d8-4477-90b7-97ee6549e1ba-certs\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077594 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a58051f-3a17-420b-aad3-453e819b7b85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hzskf\" (UID: 
\"3a58051f-3a17-420b-aad3-453e819b7b85\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.077617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf6535-3d49-4077-af9d-3d947615c61b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.079651 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.080835 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-dir\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.081918 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-serving-cert\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.082069 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aacf6535-3d49-4077-af9d-3d947615c61b-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.082657 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-metrics-certs\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.083321 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aacf6535-3d49-4077-af9d-3d947615c61b-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.085413 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-stats-auth\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" 
Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.086284 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f594bec6-55be-4db1-a25e-1fbe651b3eb2-config\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.087618 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be0ca9e5-223e-4597-a38f-9992ca7d00d1-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.088250 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-config\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.088699 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d7bb875-88e2-48e4-a81b-188f251742c2-config\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.089368 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-signing-cabundle\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.089398 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.090099 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-client-ca\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.090116 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7c1d403-b7f4-4d42-b707-54ac23853d3f-service-ca-bundle\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.092275 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-service-ca\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.092613 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-config\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.093640 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.093653 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:07.593633279 +0000 UTC m=+150.007026537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.095932 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/eb186618-19e9-4d7e-93ab-38fba228147d-images\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.097243 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.102203 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.106086 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-metrics-tls\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.106614 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3bb21a-5d8a-48e2-b115-9953c3021a67-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-gckss\" (UID: \"ed3bb21a-5d8a-48e2-b115-9953c3021a67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.106670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95c0f9f6-974c-4169-a84c-92f57fb96f2e-serving-cert\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.107457 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-policies\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.108050 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-signing-key\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.108339 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f594bec6-55be-4db1-a25e-1fbe651b3eb2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.108879 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/74f36017-681f-459c-a204-02bcaaf27d89-profile-collector-cert\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.109345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-trusted-ca\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.111050 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.111836 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" 
(UniqueName: \"kubernetes.io/secret/c7c1d403-b7f4-4d42-b707-54ac23853d3f-default-certificate\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.113057 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.113180 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb186618-19e9-4d7e-93ab-38fba228147d-proxy-tls\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.113365 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.113888 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6d4dd468-0b0e-4767-aa06-7800fd9c449f-webhook-cert\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.114087 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.114926 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/44fda25c-1ecf-4334-803c-106306261877-srv-cert\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.115065 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/95c0f9f6-974c-4169-a84c-92f57fb96f2e-etcd-client\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.115593 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/44fda25c-1ecf-4334-803c-106306261877-profile-collector-cert\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.116313 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3616ce-3561-4c0c-8901-c713984631f6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.116327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be0ca9e5-223e-4597-a38f-9992ca7d00d1-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.119181 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.119887 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/74f36017-681f-459c-a204-02bcaaf27d89-srv-cert\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.120383 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.121294 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.131600 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c87wt\" (UniqueName: \"kubernetes.io/projected/6d4dd468-0b0e-4767-aa06-7800fd9c449f-kube-api-access-c87wt\") pod \"packageserver-d55dfcdfc-5xhm6\" (UID: \"6d4dd468-0b0e-4767-aa06-7800fd9c449f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.154015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fkss\" (UniqueName: \"kubernetes.io/projected/ed3bb21a-5d8a-48e2-b115-9953c3021a67-kube-api-access-5fkss\") pod \"package-server-manager-789f6589d5-gckss\" (UID: \"ed3bb21a-5d8a-48e2-b115-9953c3021a67\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 
22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.176444 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c2rb\" (UniqueName: \"kubernetes.io/projected/eb186618-19e9-4d7e-93ab-38fba228147d-kube-api-access-9c2rb\") pod \"machine-config-operator-74547568cd-klclm\" (UID: \"eb186618-19e9-4d7e-93ab-38fba228147d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.179451 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.179856 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.179908 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg55b\" (UniqueName: \"kubernetes.io/projected/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-kube-api-access-dg55b\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.179947 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85xgw\" (UniqueName: \"kubernetes.io/projected/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-kube-api-access-85xgw\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.179996 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20d49e34-b412-49d0-8236-227ae0043102-secret-volume\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180050 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-csi-data-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180080 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-mountpoint-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180103 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/15bf26f9-ee66-489a-bda0-cacb5b094844-metrics-tls\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180123 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-socket-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180143 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a5ccb31-7635-4995-926a-927e72a69546-cert\") pod \"ingress-canary-nnhrm\" (UID: \"0a5ccb31-7635-4995-926a-927e72a69546\") " pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180163 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58g2l\" (UniqueName: \"kubernetes.io/projected/15bf26f9-ee66-489a-bda0-cacb5b094844-kube-api-access-58g2l\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180179 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d49e34-b412-49d0-8236-227ae0043102-config-volume\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180202 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-registration-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180261 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-proxy-tls\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15bf26f9-ee66-489a-bda0-cacb5b094844-config-volume\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180335 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59skw\" (UniqueName: \"kubernetes.io/projected/20d49e34-b412-49d0-8236-227ae0043102-kube-api-access-59skw\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180369 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5knnq\" (UniqueName: 
\"kubernetes.io/projected/788765a4-87d8-4477-90b7-97ee6549e1ba-kube-api-access-5knnq\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180390 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5btwt\" (UniqueName: \"kubernetes.io/projected/3a58051f-3a17-420b-aad3-453e819b7b85-kube-api-access-5btwt\") pod \"control-plane-machine-set-operator-78cbb6b69f-hzskf\" (UID: \"3a58051f-3a17-420b-aad3-453e819b7b85\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180410 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p62mf\" (UniqueName: \"kubernetes.io/projected/0a5ccb31-7635-4995-926a-927e72a69546-kube-api-access-p62mf\") pod \"ingress-canary-nnhrm\" (UID: \"0a5ccb31-7635-4995-926a-927e72a69546\") " pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180428 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a58051f-3a17-420b-aad3-453e819b7b85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hzskf\" (UID: \"3a58051f-3a17-420b-aad3-453e819b7b85\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180449 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-plugins-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180464 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/788765a4-87d8-4477-90b7-97ee6549e1ba-certs\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.180479 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/788765a4-87d8-4477-90b7-97ee6549e1ba-node-bootstrap-token\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.180869 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:07.68084367 +0000 UTC m=+150.094237068 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.182177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d49e34-b412-49d0-8236-227ae0043102-config-volume\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.182440 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.183110 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15bf26f9-ee66-489a-bda0-cacb5b094844-config-volume\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.186336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/788765a4-87d8-4477-90b7-97ee6549e1ba-node-bootstrap-token\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.190848 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-plugins-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.193756 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-socket-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.193983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-mountpoint-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.194041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-registration-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " 
pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.193995 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-csi-data-dir\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.194633 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/15bf26f9-ee66-489a-bda0-cacb5b094844-metrics-tls\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.204276 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/788765a4-87d8-4477-90b7-97ee6549e1ba-certs\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.204345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a5ccb31-7635-4995-926a-927e72a69546-cert\") pod \"ingress-canary-nnhrm\" (UID: \"0a5ccb31-7635-4995-926a-927e72a69546\") " pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.204724 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.204813 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3a58051f-3a17-420b-aad3-453e819b7b85-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hzskf\" (UID: \"3a58051f-3a17-420b-aad3-453e819b7b85\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.205051 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20d49e34-b412-49d0-8236-227ae0043102-secret-volume\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.213066 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fdc4\" (UniqueName: \"kubernetes.io/projected/95c0f9f6-974c-4169-a84c-92f57fb96f2e-kube-api-access-2fdc4\") pod \"etcd-operator-b45778765-mf6qh\" (UID: \"95c0f9f6-974c-4169-a84c-92f57fb96f2e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.214025 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-proxy-tls\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: 
\"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.217022 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0ca9e5-223e-4597-a38f-9992ca7d00d1-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-slgnm\" (UID: \"be0ca9e5-223e-4597-a38f-9992ca7d00d1\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.218014 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d7bb875-88e2-48e4-a81b-188f251742c2-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.222195 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" event={"ID":"689f4fd5-222f-46a6-a41b-bc519d7c1005","Type":"ContainerStarted","Data":"fefa03c90268e6643834a518b085fc74cffe4faca512fc31af1df7eb1916145b"} Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.234301 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5h6s\" (UniqueName: \"kubernetes.io/projected/c7c1d403-b7f4-4d42-b707-54ac23853d3f-kube-api-access-q5h6s\") pod \"router-default-5444994796-2grnj\" (UID: \"c7c1d403-b7f4-4d42-b707-54ac23853d3f\") " pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.248560 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcz2d\" (UniqueName: \"kubernetes.io/projected/74f36017-681f-459c-a204-02bcaaf27d89-kube-api-access-hcz2d\") pod \"catalog-operator-68c6474976-6b26c\" (UID: \"74f36017-681f-459c-a204-02bcaaf27d89\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.272314 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bq98\" (UniqueName: \"kubernetes.io/projected/44fda25c-1ecf-4334-803c-106306261877-kube-api-access-2bq98\") pod \"olm-operator-6b444d44fb-85rt7\" (UID: \"44fda25c-1ecf-4334-803c-106306261877\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.282930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.283917 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:07.783898781 +0000 UTC m=+150.197292039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.297318 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.306166 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.327940 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.330747 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r67bs\" (UniqueName: \"kubernetes.io/projected/aa013f01-5701-4d63-bc2c-284f5d4a397f-kube-api-access-r67bs\") pod \"marketplace-operator-79b997595-x4fc7\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.335122 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.347991 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.349898 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcbgr\" (UniqueName: \"kubernetes.io/projected/35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3-kube-api-access-pcbgr\") pod \"service-ca-9c57cc56f-kbbhd\" (UID: \"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3\") " pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.375626 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.376898 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbjfc\" (UniqueName: \"kubernetes.io/projected/aacf6535-3d49-4077-af9d-3d947615c61b-kube-api-access-mbjfc\") pod \"kube-storage-version-migrator-operator-b67b599dd-m6rv8\" (UID: \"aacf6535-3d49-4077-af9d-3d947615c61b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.377066 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4vj\" (UniqueName: \"kubernetes.io/projected/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-kube-api-access-qb4vj\") pod \"route-controller-manager-6576b87f9c-s6j42\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.382064 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.384115 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.384593 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:07.884572785 +0000 UTC m=+150.297966043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.387948 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.395002 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.403315 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.411776 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxs7\" (UniqueName: \"kubernetes.io/projected/4e3616ce-3561-4c0c-8901-c713984631f6-kube-api-access-4nxs7\") pod \"openshift-apiserver-operator-796bbdcf4f-76kb2\" (UID: \"4e3616ce-3561-4c0c-8901-c713984631f6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.413951 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d7bb875-88e2-48e4-a81b-188f251742c2-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tkprx\" (UID: \"8d7bb875-88e2-48e4-a81b-188f251742c2\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.416085 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsn98\" (UniqueName: \"kubernetes.io/projected/e611448f-b8a0-4e60-94a5-51d929ea1b5f-kube-api-access-nsn98\") pod \"migrator-59844c95c7-cbrxg\" (UID: \"e611448f-b8a0-4e60-94a5-51d929ea1b5f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.419055 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.426410 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.436873 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlh2j\" (UniqueName: \"kubernetes.io/projected/8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23-kube-api-access-hlh2j\") pod \"ingress-operator-5b745b69d9-lmvpd\" (UID: \"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.457586 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stzmp\" (UniqueName: \"kubernetes.io/projected/9a816ade-c1d6-48c0-a246-4d3407f90e58-kube-api-access-stzmp\") pod \"oauth-openshift-558db77b4-s2hxc\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.479762 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f594bec6-55be-4db1-a25e-1fbe651b3eb2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xfq4s\" (UID: \"f594bec6-55be-4db1-a25e-1fbe651b3eb2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.487071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.487501 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:07.987480682 +0000 UTC m=+150.400873940 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.510909 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58g2l\" (UniqueName: \"kubernetes.io/projected/15bf26f9-ee66-489a-bda0-cacb5b094844-kube-api-access-58g2l\") pod \"dns-default-mvcdt\" (UID: \"15bf26f9-ee66-489a-bda0-cacb5b094844\") " pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.550126 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5knnq\" (UniqueName: \"kubernetes.io/projected/788765a4-87d8-4477-90b7-97ee6549e1ba-kube-api-access-5knnq\") pod \"machine-config-server-gpfpp\" (UID: \"788765a4-87d8-4477-90b7-97ee6549e1ba\") " pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.554565 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg55b\" (UniqueName: \"kubernetes.io/projected/eb2769ed-3d4b-4e62-8298-b05cc6dcca3b-kube-api-access-dg55b\") pod \"machine-config-controller-84d6567774-5cq4r\" (UID: \"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.578591 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5btwt\" (UniqueName: \"kubernetes.io/projected/3a58051f-3a17-420b-aad3-453e819b7b85-kube-api-access-5btwt\") pod \"control-plane-machine-set-operator-78cbb6b69f-hzskf\" (UID: \"3a58051f-3a17-420b-aad3-453e819b7b85\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.589150 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.589439 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.590493 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:05:08.090452302 +0000 UTC m=+150.503845560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.591480 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cxh4g"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.595667 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p62mf\" (UniqueName: \"kubernetes.io/projected/0a5ccb31-7635-4995-926a-927e72a69546-kube-api-access-p62mf\") pod \"ingress-canary-nnhrm\" (UID: \"0a5ccb31-7635-4995-926a-927e72a69546\") " pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.612585 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.612651 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-2szb8"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.618759 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.619605 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.622961 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rc5xn"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.630671 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.632266 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-mdfqc"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.636300 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.636635 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-psfmg"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.641365 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.646365 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.650191 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-56qbr"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.650246 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85xgw\" (UniqueName: \"kubernetes.io/projected/53ec176a-6d8d-43a2-8523-78fd3cd12cd9-kube-api-access-85xgw\") pod \"csi-hostpathplugin-hlxfj\" (UID: \"53ec176a-6d8d-43a2-8523-78fd3cd12cd9\") " pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.654947 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.655315 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.658754 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-lpbp9"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.659028 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.667027 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-mf6qh"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.668357 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.670695 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59skw\" (UniqueName: \"kubernetes.io/projected/20d49e34-b412-49d0-8236-227ae0043102-kube-api-access-59skw\") pod \"collect-profiles-29396580-dt9th\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.681534 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-57k7r"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.683421 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-csttt"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.691604 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.692162 4856 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.19214462 +0000 UTC m=+150.605537878 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.732265 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.734364 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.739592 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.746152 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.753807 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.785685 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.795128 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gpfpp" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.795898 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.295871487 +0000 UTC m=+150.709264745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.795939 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.796237 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.796763 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.296753958 +0000 UTC m=+150.710147216 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.797319 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.803771 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nnhrm" Nov 22 07:05:07 crc kubenswrapper[4856]: W1122 07:05:07.808423 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dff5c22_ed64_4f83_9f80_3c618d5585ab.slice/crio-90ce0af4d355f13213f9768c737ada85c7c51ef8ab1a03b5d80330f303570841 WatchSource:0}: Error finding container 90ce0af4d355f13213f9768c737ada85c7c51ef8ab1a03b5d80330f303570841: Status 404 returned error can't find the container with id 90ce0af4d355f13213f9768c737ada85c7c51ef8ab1a03b5d80330f303570841 Nov 22 07:05:07 crc kubenswrapper[4856]: W1122 07:05:07.810205 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52414feb_0c08_4591_a84a_985167853ba3.slice/crio-569b8c99bfcc5c045edcd2f399f317b03cfc31bf8745782aa2ace0e27577dccf WatchSource:0}: Error finding container 569b8c99bfcc5c045edcd2f399f317b03cfc31bf8745782aa2ace0e27577dccf: Status 404 returned error can't find the container with id 569b8c99bfcc5c045edcd2f399f317b03cfc31bf8745782aa2ace0e27577dccf Nov 22 07:05:07 crc kubenswrapper[4856]: W1122 07:05:07.812249 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f1bb024_f9c1_46f4_8805_4dd12cf9a369.slice/crio-ee186936dd8812f95cbcde9d038b5365e21b4b3f971ac6de95dd59a7c0ce6cc8 WatchSource:0}: Error finding container ee186936dd8812f95cbcde9d038b5365e21b4b3f971ac6de95dd59a7c0ce6cc8: Status 404 returned error can't find the container with id ee186936dd8812f95cbcde9d038b5365e21b4b3f971ac6de95dd59a7c0ce6cc8 Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.854473 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x4fc7"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.862592 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7"] Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.897535 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.897840 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.397791821 +0000 UTC m=+150.811185079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:07 crc kubenswrapper[4856]: I1122 07:05:07.898376 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:07 crc kubenswrapper[4856]: E1122 07:05:07.898883 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.398852237 +0000 UTC m=+150.812245655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.000219 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.000621 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.500589666 +0000 UTC m=+150.913982934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.102006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.102468 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.602444858 +0000 UTC m=+151.015838116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.104078 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kbbhd"] Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.108957 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss"] Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.120043 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-klclm"] Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.174017 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg"] Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.181713 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c"] Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.193898 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6"] Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.203025 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.203356 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:05:08.703328997 +0000 UTC m=+151.116722255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.203675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.204059 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.704051265 +0000 UTC m=+151.117444523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: W1122 07:05:08.215264 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d4dd468_0b0e_4767_aa06_7800fd9c449f.slice/crio-06666cd298df9d6f4bb4aa3d18323eaccb61d4b35ce0b6d00d9f4a2c200350e5 WatchSource:0}: Error finding container 06666cd298df9d6f4bb4aa3d18323eaccb61d4b35ce0b6d00d9f4a2c200350e5: Status 404 returned error can't find the container with id 06666cd298df9d6f4bb4aa3d18323eaccb61d4b35ce0b6d00d9f4a2c200350e5 Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.243199 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" event={"ID":"44fda25c-1ecf-4334-803c-106306261877","Type":"ContainerStarted","Data":"69bf14e9f40073b022bcfd50363ebe2abd795e5885789e4a52e2e5a3fe663663"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.245369 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" event={"ID":"4f1bb024-f9c1-46f4-8805-4dd12cf9a369","Type":"ContainerStarted","Data":"ee186936dd8812f95cbcde9d038b5365e21b4b3f971ac6de95dd59a7c0ce6cc8"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.247395 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" event={"ID":"18513a7b-b0ef-4b3a-be63-ebd97482baa7","Type":"ContainerStarted","Data":"6779d9353808523f4b97d9aa5ce478efbf8476e828fbb589e7ee5428843c7274"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.250919 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" event={"ID":"4dff5c22-ed64-4f83-9f80-3c618d5585ab","Type":"ContainerStarted","Data":"90ce0af4d355f13213f9768c737ada85c7c51ef8ab1a03b5d80330f303570841"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.254896 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" event={"ID":"74f36017-681f-459c-a204-02bcaaf27d89","Type":"ContainerStarted","Data":"f9277f6462857e187c75946924dc861979cd04115cbb4e65fcadd3a751a1fc6b"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.256174 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" event={"ID":"312bb5c3-467c-48bb-967f-b8aadfa43e94","Type":"ContainerStarted","Data":"5c0a7a4add72cd2bd4ced506c91a130ec191504df0cec9952ceea9ffeb6a527a"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.257919 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" event={"ID":"6d4dd468-0b0e-4767-aa06-7800fd9c449f","Type":"ContainerStarted","Data":"06666cd298df9d6f4bb4aa3d18323eaccb61d4b35ce0b6d00d9f4a2c200350e5"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.259046 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" event={"ID":"40737369-e550-4119-b969-44e99b9ec9e7","Type":"ContainerStarted","Data":"3c667c1c5d8daf6fe88f9ff788f92bf982100a421e396dc95cc62865fa967321"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.260722 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" event={"ID":"ed3bb21a-5d8a-48e2-b115-9953c3021a67","Type":"ContainerStarted","Data":"4295e1473786cc1b4a29f7bbc4af818159d922bfb1892c26bc1e74ad6e05933d"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.262112 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" event={"ID":"b237e36f-a520-4471-82a5-5d26aff897b1","Type":"ContainerStarted","Data":"a3b7b2b4523a1283f54e72d3e753c6a11217cd896109206d61ec550ec698f7c6"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.263188 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" event={"ID":"8e21e2b8-4129-4670-96a9-e587637a3a04","Type":"ContainerStarted","Data":"a2e939700b9c0318125ce37825fcf39c14b4cdef1287052bf16031333456aa5e"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.265075 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" event={"ID":"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c","Type":"ContainerStarted","Data":"4f60325297fe54b6f479abecedc882114972cb040f3a0fdffba23502f144a821"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.265935 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" event={"ID":"e611448f-b8a0-4e60-94a5-51d929ea1b5f","Type":"ContainerStarted","Data":"228297f76342af9de026b4368e6a3e626ac09819277e769fa582ff717076bd93"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.267484 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" 
event={"ID":"89b98d6b-28cd-4530-ac68-f717832a84b0","Type":"ContainerStarted","Data":"d82fbc78a4c4c7c59a4beed2fdc83ab065f97f0315976598b5c5c93371d35ca6"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.268798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" event={"ID":"eb186618-19e9-4d7e-93ab-38fba228147d","Type":"ContainerStarted","Data":"078cfa9e6ba5284a84819aad17ba48f5729aeb9c28281d6b17871e98f7ea505d"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.270175 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" event={"ID":"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3","Type":"ContainerStarted","Data":"95ac7a9a803257cce6ec312f4c2aa951db816d5c0ba50c49d0a83b44956dfa25"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.272641 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" event={"ID":"aa013f01-5701-4d63-bc2c-284f5d4a397f","Type":"ContainerStarted","Data":"8d5542d7fdf572831ec1ebf9cb55075a48553a24ec8eb3deed1715234573c0cb"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.273837 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" event={"ID":"95c0f9f6-974c-4169-a84c-92f57fb96f2e","Type":"ContainerStarted","Data":"b7f14959ac4234e186b1067f871f99ce2e46210cad66c4bf413efe2b20878e9c"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.275190 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57k7r" event={"ID":"2cb75722-66d1-46a3-b867-1cab32f01ede","Type":"ContainerStarted","Data":"66eda2f63d1b3fe39ffc6c3506c0c2e04e81446e43d279ab5bb82e89ccdf1a9b"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.278764 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" event={"ID":"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4","Type":"ContainerStarted","Data":"cf7fcdff8974e908d94ab43327f6f2891f96016cfc189a35a5cb850f6e4530df"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.279762 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" event={"ID":"be0ca9e5-223e-4597-a38f-9992ca7d00d1","Type":"ContainerStarted","Data":"071a333051e837537cf037bb5a517849664705b63aa43a346a7ed2c84e7d23c3"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.286292 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" event={"ID":"d9a5e5b4-a255-4888-b381-e743b2440738","Type":"ContainerStarted","Data":"da8ff4bc1e58f22c687dd66ea124a07a589bbfc04d0698232c2f49aedc038c4d"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.288877 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" event={"ID":"689f4fd5-222f-46a6-a41b-bc519d7c1005","Type":"ContainerStarted","Data":"a86b380cb3789061b107f7d2ccda54c1586183abdf1ee2e73b01e1fc27deed25"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.291153 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2grnj" event={"ID":"c7c1d403-b7f4-4d42-b707-54ac23853d3f","Type":"ContainerStarted","Data":"daad6eb99db9d4882e60186127609d49a93a64defbf23fe69396df1bf96b1cda"} Nov 22 07:05:08 crc kubenswrapper[4856]: 
I1122 07:05:08.293354 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" event={"ID":"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f","Type":"ContainerStarted","Data":"aa78fdd83656dbc58edbc662698d19a1d3a0d65ebe62ff659a202ea5ba40d529"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.294393 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psfmg" event={"ID":"52414feb-0c08-4591-a84a-985167853ba3","Type":"ContainerStarted","Data":"569b8c99bfcc5c045edcd2f399f317b03cfc31bf8745782aa2ace0e27577dccf"} Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.304950 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.305026 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.804977675 +0000 UTC m=+151.218370933 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.306215 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.306683 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.806674265 +0000 UTC m=+151.220067523 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.415479 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.416936 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.915918805 +0000 UTC m=+151.329312073 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.417723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.418221 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:08.91820457 +0000 UTC m=+151.331597828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.519086 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.519240 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.019207851 +0000 UTC m=+151.432601109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.519336 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.519828 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.019819826 +0000 UTC m=+151.433213084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.620980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.621313 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.121254488 +0000 UTC m=+151.534647756 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.621572 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.622201 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.12218974 +0000 UTC m=+151.535583008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.724766 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.724970 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.224931654 +0000 UTC m=+151.638324912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.725211 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.725678 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.225668871 +0000 UTC m=+151.639062329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.826553 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.826736 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.326709425 +0000 UTC m=+151.740102683 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.826834 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.827234 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.327224317 +0000 UTC m=+151.740617565 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.928434 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.928640 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.428608088 +0000 UTC m=+151.842001356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:08 crc kubenswrapper[4856]: I1122 07:05:08.928944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:08 crc kubenswrapper[4856]: E1122 07:05:08.929499 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.429484389 +0000 UTC m=+151.842877647 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.030098 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.030291 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.530259826 +0000 UTC m=+151.943653094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.030905 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.031295 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.53128464 +0000 UTC m=+151.944678088 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.110358 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s"] Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.132319 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.132434 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.632408765 +0000 UTC m=+152.045802023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.132582 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.132927 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.632918737 +0000 UTC m=+152.046311995 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.241913 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.242298 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.742241218 +0000 UTC m=+152.155634466 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.242731 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.243241 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.743208982 +0000 UTC m=+152.156602240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.300665 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" event={"ID":"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0","Type":"ContainerStarted","Data":"a5b7bb2a7dddfde95d4984b6c28680370264874c02894562603bcfd6d3469c01"} Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.344244 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.344482 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.844445299 +0000 UTC m=+152.257838557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.344699 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.345145 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.845127716 +0000 UTC m=+152.258520974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.445267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.445548 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.945485511 +0000 UTC m=+152.358878789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.445620 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.446035 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:09.946017325 +0000 UTC m=+152.359410573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.542750 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th"] Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.545151 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8"] Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.546055 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.546193 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.046148475 +0000 UTC m=+152.459541733 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.547723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.548148 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.048134263 +0000 UTC m=+152.461527521 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.549058 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd"] Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.649238 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.649647 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.149590936 +0000 UTC m=+152.562984194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.650347 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.650820 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.150800565 +0000 UTC m=+152.564193823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: W1122 07:05:09.677815 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaacf6535_3d49_4077_af9d_3d947615c61b.slice/crio-0a840e8031b2be4d044fde785292530ac9adb2d8490d7f0e57225a87083eee44 WatchSource:0}: Error finding container 0a840e8031b2be4d044fde785292530ac9adb2d8490d7f0e57225a87083eee44: Status 404 returned error can't find the container with id 0a840e8031b2be4d044fde785292530ac9adb2d8490d7f0e57225a87083eee44 Nov 22 07:05:09 crc kubenswrapper[4856]: W1122 07:05:09.681661 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20d49e34_b412_49d0_8236_227ae0043102.slice/crio-aece58ebeead4a67cd4f5d8110e8a28fe7826570f1968199b52ed1860b057d8a WatchSource:0}: Error finding container aece58ebeead4a67cd4f5d8110e8a28fe7826570f1968199b52ed1860b057d8a: Status 404 returned error can't find the container with id aece58ebeead4a67cd4f5d8110e8a28fe7826570f1968199b52ed1860b057d8a Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.751175 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.751497 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.251476759 +0000 UTC m=+152.664870027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.792219 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx"] Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.855395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.856273 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.356223001 +0000 UTC m=+152.769616259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.868643 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2hxc"] Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.956823 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:09 crc kubenswrapper[4856]: E1122 07:05:09.957392 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.457367296 +0000 UTC m=+152.870760574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.976043 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42"] Nov 22 07:05:09 crc kubenswrapper[4856]: I1122 07:05:09.979388 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nnhrm"] Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.058838 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.059302 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.55928202 +0000 UTC m=+152.972675278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.127818 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2"] Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.148260 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r"] Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.160969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.161235 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.661199704 +0000 UTC m=+153.074592972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.161397 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.161861 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.661847019 +0000 UTC m=+153.075240467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.218052 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hlxfj"] Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.224236 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mvcdt"] Nov 22 07:05:10 crc kubenswrapper[4856]: W1122 07:05:10.225045 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5ccb31_7635_4995_926a_927e72a69546.slice/crio-3227884447f0416e8c0b28fa1ad3518d3c0aee11e4c449d957858c556b9e2df2 WatchSource:0}: Error finding container 3227884447f0416e8c0b28fa1ad3518d3c0aee11e4c449d957858c556b9e2df2: Status 404 returned error can't find the container with id 3227884447f0416e8c0b28fa1ad3518d3c0aee11e4c449d957858c556b9e2df2 Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.238135 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf"] Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.262136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.262433 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:05:10.76238994 +0000 UTC m=+153.175783198 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.306430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" event={"ID":"f594bec6-55be-4db1-a25e-1fbe651b3eb2","Type":"ContainerStarted","Data":"83f6a05e38659e4c1576bec66a149919c563e1e135cb5ed2d0fe6953cf415e6a"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.308080 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" event={"ID":"aacf6535-3d49-4077-af9d-3d947615c61b","Type":"ContainerStarted","Data":"0a840e8031b2be4d044fde785292530ac9adb2d8490d7f0e57225a87083eee44"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.309407 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" event={"ID":"95c0f9f6-974c-4169-a84c-92f57fb96f2e","Type":"ContainerStarted","Data":"d193364f507878883669bc814232bcada754d7fa8a8d9027709e82ef1f22eec5"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.310410 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" event={"ID":"8d7bb875-88e2-48e4-a81b-188f251742c2","Type":"ContainerStarted","Data":"6c00f5d588d00d71a643d7c3a05c2c468a765354f08306a4d88438d2e60d8ac9"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.311853 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" event={"ID":"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23","Type":"ContainerStarted","Data":"9d5ebacc814e6aa1d1acda595c98b7b64dc39eeb7de01896b8a198930717438e"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.313808 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-2grnj" event={"ID":"c7c1d403-b7f4-4d42-b707-54ac23853d3f","Type":"ContainerStarted","Data":"91120a35b8c73b7612e331b85085e969acd543d1a0556a41cd6a82ee3f6db04b"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.315360 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" event={"ID":"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0","Type":"ContainerStarted","Data":"5b9584540ec95b7ea31e3fd3f0e05d164f1ee4603a3edd2fd69b08a71a0b78ae"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.316690 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nnhrm" event={"ID":"0a5ccb31-7635-4995-926a-927e72a69546","Type":"ContainerStarted","Data":"3227884447f0416e8c0b28fa1ad3518d3c0aee11e4c449d957858c556b9e2df2"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.318297 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" 
event={"ID":"20d49e34-b412-49d0-8236-227ae0043102","Type":"ContainerStarted","Data":"aece58ebeead4a67cd4f5d8110e8a28fe7826570f1968199b52ed1860b057d8a"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.319528 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gpfpp" event={"ID":"788765a4-87d8-4477-90b7-97ee6549e1ba","Type":"ContainerStarted","Data":"50c7da3b074d65ba292f5281722f90ae3d2156c7be9d9838399807a8615fe652"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.320361 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" event={"ID":"9a816ade-c1d6-48c0-a246-4d3407f90e58","Type":"ContainerStarted","Data":"4a8025d528eb75f91f293d8bd8881eea7b72255a319dafb320e24c37a1d86221"} Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.364195 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.364895 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.864865067 +0000 UTC m=+153.278258365 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: W1122 07:05:10.456530 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e3616ce_3561_4c0c_8901_c713984631f6.slice/crio-8615d1ce1e8490e2daa05a9bd6a56c284dce4baeedfcb7a3ef96a71c4a5be85f WatchSource:0}: Error finding container 8615d1ce1e8490e2daa05a9bd6a56c284dce4baeedfcb7a3ef96a71c4a5be85f: Status 404 returned error can't find the container with id 8615d1ce1e8490e2daa05a9bd6a56c284dce4baeedfcb7a3ef96a71c4a5be85f Nov 22 07:05:10 crc kubenswrapper[4856]: W1122 07:05:10.458276 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb2769ed_3d4b_4e62_8298_b05cc6dcca3b.slice/crio-2ed1daee8c482107469b196e3ce51b0599004c5cdd98b04d5d4aaaf520866ef6 WatchSource:0}: Error finding container 2ed1daee8c482107469b196e3ce51b0599004c5cdd98b04d5d4aaaf520866ef6: Status 404 returned error can't find the container with id 2ed1daee8c482107469b196e3ce51b0599004c5cdd98b04d5d4aaaf520866ef6 Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.467902 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.476617 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:10.976579136 +0000 UTC m=+153.389972394 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.570798 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.571229 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.071213745 +0000 UTC m=+153.484607003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.672093 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.672960 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.172929754 +0000 UTC m=+153.586323042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.773915 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.774638 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.274612982 +0000 UTC m=+153.688006230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.875207 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.875364 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.375346288 +0000 UTC m=+153.788739546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.875452 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.875773 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.375764947 +0000 UTC m=+153.789158205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: W1122 07:05:10.909133 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a58051f_3a17_420b_aad3_453e819b7b85.slice/crio-3abb625779646040519b6f02fc349f5cd1a2afb1a0ad9d7c6086cc0fbe4d1f3b WatchSource:0}: Error finding container 3abb625779646040519b6f02fc349f5cd1a2afb1a0ad9d7c6086cc0fbe4d1f3b: Status 404 returned error can't find the container with id 3abb625779646040519b6f02fc349f5cd1a2afb1a0ad9d7c6086cc0fbe4d1f3b Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.976873 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.977154 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.477090136 +0000 UTC m=+153.890483404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:10 crc kubenswrapper[4856]: I1122 07:05:10.977715 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:10 crc kubenswrapper[4856]: E1122 07:05:10.978210 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.478199653 +0000 UTC m=+153.891593131 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.079280 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.079556 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.579480363 +0000 UTC m=+153.992873621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.080011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.080571 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.580547777 +0000 UTC m=+153.993941035 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.180739 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.181127 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.681109549 +0000 UTC m=+154.094502807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.282372 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.282756 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.782734286 +0000 UTC m=+154.196127714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.327953 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" event={"ID":"3a58051f-3a17-420b-aad3-453e819b7b85","Type":"ContainerStarted","Data":"3abb625779646040519b6f02fc349f5cd1a2afb1a0ad9d7c6086cc0fbe4d1f3b"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.329656 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" event={"ID":"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b","Type":"ContainerStarted","Data":"2ed1daee8c482107469b196e3ce51b0599004c5cdd98b04d5d4aaaf520866ef6"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.331858 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" event={"ID":"4f1bb024-f9c1-46f4-8805-4dd12cf9a369","Type":"ContainerStarted","Data":"1724a4c2f2253b3008cad83d2d2e52018f0a375db5a6d62b46a124c6bfdeaf8e"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.337039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" event={"ID":"7c5787e0-4c30-4ab6-8fa4-0936744b8fc4","Type":"ContainerStarted","Data":"43284937d3852a671641d0c82157794931e1f053294c29038e6f7cf5a0a6be8a"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.339003 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" event={"ID":"53ec176a-6d8d-43a2-8523-78fd3cd12cd9","Type":"ContainerStarted","Data":"a421222547ca6a187f1e563fbe2f88b94ddcb8e7366c939b6baccc2c85103d32"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.341109 4856 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" event={"ID":"be0ca9e5-223e-4597-a38f-9992ca7d00d1","Type":"ContainerStarted","Data":"a3fd4ad7128f914b8ccb67faf79060444ea0b2473c06b3cb7b4fc3e9f815e3a3"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.343222 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" event={"ID":"40737369-e550-4119-b969-44e99b9ec9e7","Type":"ContainerStarted","Data":"2a76929d3fc1020296f19feb0b6aa0a5bf71fcc6005671c81cdb22d1096554be"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.346633 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psfmg" event={"ID":"52414feb-0c08-4591-a84a-985167853ba3","Type":"ContainerStarted","Data":"e23ade9a36de2945fdd952c812a20c1af881dd1ee5e0dc8dd858debd512f1f58"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.347676 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mvcdt" event={"ID":"15bf26f9-ee66-489a-bda0-cacb5b094844","Type":"ContainerStarted","Data":"79e178f55edadc151c6ae332a8bc5ce61d7a3a88d1bdd8bafccd36c9aa8e888d"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.348782 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" event={"ID":"4e3616ce-3561-4c0c-8901-c713984631f6","Type":"ContainerStarted","Data":"8615d1ce1e8490e2daa05a9bd6a56c284dce4baeedfcb7a3ef96a71c4a5be85f"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.350490 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" event={"ID":"89b98d6b-28cd-4530-ac68-f717832a84b0","Type":"ContainerStarted","Data":"19542a0d1d28f29f7415a491d905b6628ff019d2e3229bdec152b1cbc4aaea63"} Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.376134 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-mf6qh" podStartSLOduration=125.376061473 podStartE2EDuration="2m5.376061473s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:11.372846716 +0000 UTC m=+153.786239994" watchObservedRunningTime="2025-11-22 07:05:11.376061473 +0000 UTC m=+153.789454731" Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.388986 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.389274 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.8892371 +0000 UTC m=+154.302630368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.403604 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.406359 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.407391 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:11.907368394 +0000 UTC m=+154.320761672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.407727 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-2grnj" podStartSLOduration=125.407701722 podStartE2EDuration="2m5.407701722s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:11.40634645 +0000 UTC m=+153.819739728" watchObservedRunningTime="2025-11-22 07:05:11.407701722 +0000 UTC m=+153.821095000" Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.410340 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.410439 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.474200 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.505353 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.506561 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.006535812 +0000 UTC m=+154.419929070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.607305 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.608131 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.108115098 +0000 UTC m=+154.521508356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.709219 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.709442 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.209402236 +0000 UTC m=+154.622795494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.709823 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.710222 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.210206506 +0000 UTC m=+154.623599764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.811264 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.811502 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.311447873 +0000 UTC m=+154.724841131 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.811773 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.812114 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.312104419 +0000 UTC m=+154.725497677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:11 crc kubenswrapper[4856]: I1122 07:05:11.916445 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:11 crc kubenswrapper[4856]: E1122 07:05:11.917322 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.417293281 +0000 UTC m=+154.830686539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.018959 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.019610 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.519583494 +0000 UTC m=+154.932976762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.120368 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.120790 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.62076265 +0000 UTC m=+155.034155908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.222215 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.222752 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.722727875 +0000 UTC m=+155.136121133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.324106 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.324313 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.824281981 +0000 UTC m=+155.237675239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.324434 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.324928 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.824918815 +0000 UTC m=+155.238312073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.360203 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" event={"ID":"312bb5c3-467c-48bb-967f-b8aadfa43e94","Type":"ContainerStarted","Data":"f696dba4f73fa470c46eeea13d8205a750e8769209099d4b90dbcdd4c9de8c4f"} Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.361713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" event={"ID":"8e21e2b8-4129-4670-96a9-e587637a3a04","Type":"ContainerStarted","Data":"f33b5b229459f49ffaade39567cc50052dc75498cb0982691d9577e51b4766eb"} Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.363090 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" event={"ID":"4dff5c22-ed64-4f83-9f80-3c618d5585ab","Type":"ContainerStarted","Data":"2f4c321a72427f4aff6e53a070055b65fe36c8cbe1ff1112bb15577e964bc7d7"} Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.364316 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" event={"ID":"eb186618-19e9-4d7e-93ab-38fba228147d","Type":"ContainerStarted","Data":"005fdcc380d4c69708a27c51ac21aab70d38a881dbb14b5b3031246fa01cb642"} Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.365929 4856 generic.go:334] "Generic (PLEG): container finished" podID="d9a5e5b4-a255-4888-b381-e743b2440738" containerID="cdd7523cc4814f5c899d3d0eab06476da0d66321ebdf14fba0a380273e48b284" exitCode=0 Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.365987 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" 
event={"ID":"d9a5e5b4-a255-4888-b381-e743b2440738","Type":"ContainerDied","Data":"cdd7523cc4814f5c899d3d0eab06476da0d66321ebdf14fba0a380273e48b284"} Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.367338 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" event={"ID":"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f","Type":"ContainerStarted","Data":"e79fb505f3364a9b790a6cc07e9dc76dbce00678dc37abb4c4d0b7356e4ba1a5"} Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.368705 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" event={"ID":"18513a7b-b0ef-4b3a-be63-ebd97482baa7","Type":"ContainerStarted","Data":"18b1049c8750fd9de018c1c3960f923238d395d80b6fee8f40d2ac4465e62818"} Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.369373 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.373184 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-rc5xn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.373247 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" podUID="89b98d6b-28cd-4530-ac68-f717832a84b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.398006 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.398066 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.400972 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" podStartSLOduration=127.400957788 podStartE2EDuration="2m7.400957788s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:12.397200079 +0000 UTC m=+154.810593357" watchObservedRunningTime="2025-11-22 07:05:12.400957788 +0000 UTC m=+154.814351046" Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.425837 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.426058 4856 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.926023599 +0000 UTC m=+155.339416857 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.426433 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.427566 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:12.927555876 +0000 UTC m=+155.340949134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.528978 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.529196 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.029162663 +0000 UTC m=+155.442555931 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.529892 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.530347 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.030325661 +0000 UTC m=+155.443718919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.631782 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.631999 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.131965867 +0000 UTC m=+155.545359135 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.632224 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.632557 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.132544522 +0000 UTC m=+155.545937780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.736969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.737229 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.237146209 +0000 UTC m=+155.650539477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.737297 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.737896 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.237879867 +0000 UTC m=+155.651273305 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.839791 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.840059 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.340020407 +0000 UTC m=+155.753413705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.840363 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.840998 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.34097818 +0000 UTC m=+155.754371468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.943452 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.943765 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.443705463 +0000 UTC m=+155.857098761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:12 crc kubenswrapper[4856]: I1122 07:05:12.944345 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:12 crc kubenswrapper[4856]: E1122 07:05:12.948525 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.448475987 +0000 UTC m=+155.861869245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.047173 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.047914 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.54787336 +0000 UTC m=+155.961266618 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.048212 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.048667 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.548659369 +0000 UTC m=+155.962052627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.149616 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.149756 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.649727513 +0000 UTC m=+156.063120791 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.150263 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.150744 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.650733637 +0000 UTC m=+156.064126905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.252221 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.252402 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.752374694 +0000 UTC m=+156.165767952 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.252810 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.253263 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.753253785 +0000 UTC m=+156.166647043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.357004 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.357243 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.857199477 +0000 UTC m=+156.270592745 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.357465 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.357961 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.857948016 +0000 UTC m=+156.271341454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.400144 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.400222 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.401492 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" event={"ID":"ed3bb21a-5d8a-48e2-b115-9953c3021a67","Type":"ContainerStarted","Data":"a86d6093277bd06728b9532d42f89092c43ca68036b4dd512c69a991672cc2c4"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.414684 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" event={"ID":"aacf6535-3d49-4077-af9d-3d947615c61b","Type":"ContainerStarted","Data":"6529fa032eb1eb18fe9783565f3cc1460870284ff209e8e1618d68c074a87068"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.417203 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" event={"ID":"4e3616ce-3561-4c0c-8901-c713984631f6","Type":"ContainerStarted","Data":"ff4b29a261f4fe8fde74f4293ff239d80c9989c40cad8e61fee53b55f3b06009"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 
07:05:13.422815 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" event={"ID":"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c","Type":"ContainerStarted","Data":"4c8419c8738d9ab89d30c38007a86d8984fabeaf9bc42ffe741c3a8df72baffa"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.428560 4856 generic.go:334] "Generic (PLEG): container finished" podID="b237e36f-a520-4471-82a5-5d26aff897b1" containerID="f71b6f0bc3229410a0f1d11d27cc8ea009cec398b3935c9bd67aa65f6922bbf5" exitCode=0 Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.428627 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" event={"ID":"b237e36f-a520-4471-82a5-5d26aff897b1","Type":"ContainerDied","Data":"f71b6f0bc3229410a0f1d11d27cc8ea009cec398b3935c9bd67aa65f6922bbf5"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.430492 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" event={"ID":"e611448f-b8a0-4e60-94a5-51d929ea1b5f","Type":"ContainerStarted","Data":"ce3dc8774d52e1c7311c7a328028ea6b5b3814c02d6c576b02078eb15ce1d081"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.432301 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" event={"ID":"44fda25c-1ecf-4334-803c-106306261877","Type":"ContainerStarted","Data":"e3e6b172437eec0c468422598190720a9cfe3183c08f90c08d9f3055a0fe7daa"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.435636 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" event={"ID":"aa013f01-5701-4d63-bc2c-284f5d4a397f","Type":"ContainerStarted","Data":"e8fda0b1d8dbbcd711bba9088ec5870e05fdd03cb7cf57ba13d6a369dbf3c804"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.439439 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57k7r" event={"ID":"2cb75722-66d1-46a3-b867-1cab32f01ede","Type":"ContainerStarted","Data":"6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.441709 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" event={"ID":"689f4fd5-222f-46a6-a41b-bc519d7c1005","Type":"ContainerStarted","Data":"567193f7e64b5de04f63761e231f6fe0ff8d3e39b7078cdd44adc31938f3415b"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.451681 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gpfpp" event={"ID":"788765a4-87d8-4477-90b7-97ee6549e1ba","Type":"ContainerStarted","Data":"e0004e5d7f758e9da0827e076a7a21215733a9caacca3570d4a97801872ec7a3"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.453866 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" event={"ID":"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23","Type":"ContainerStarted","Data":"65b51c180a7890720257cb0e99a0ea6a03ccf3227e59d6156cac16d380670ec6"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.457741 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" 
event={"ID":"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0","Type":"ContainerStarted","Data":"eba4011d35785dc79779c441c0b61b462bcd2066c6b0de2b4161a29996962d6e"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.458406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.458616 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.958590709 +0000 UTC m=+156.371983977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.458860 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.459293 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:13.959283945 +0000 UTC m=+156.372677213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.465285 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" event={"ID":"35ef6a81-2b58-4955-8c46-2d6fe7a4c6d3","Type":"ContainerStarted","Data":"0b70d6dcf0fa17d98da2babcd3d44e88cc9a36b7ff203a31c995cb4a09ac37aa"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.469399 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" event={"ID":"74f36017-681f-459c-a204-02bcaaf27d89","Type":"ContainerStarted","Data":"545c325d6162c8de4f9ee206dbfacdbc63677f42667bee6450fe0f003c219dab"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.472006 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" event={"ID":"6d4dd468-0b0e-4767-aa06-7800fd9c449f","Type":"ContainerStarted","Data":"7ef5e4b463bddd41bae5b556eacce5f4d0f0c915867959a71aa843c12062a517"} Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.472710 4856 patch_prober.go:28] interesting pod/console-operator-58897d9998-rc5xn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.472765 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" podUID="89b98d6b-28cd-4530-ac68-f717832a84b0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.491445 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-psfmg" podStartSLOduration=128.491425626 podStartE2EDuration="2m8.491425626s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:13.48992048 +0000 UTC m=+155.903313738" watchObservedRunningTime="2025-11-22 07:05:13.491425626 +0000 UTC m=+155.904818884" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.513754 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lr6p4" podStartSLOduration=128.513719721 podStartE2EDuration="2m8.513719721s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:13.510388891 +0000 UTC m=+155.923782159" watchObservedRunningTime="2025-11-22 07:05:13.513719721 +0000 UTC m=+155.927112999" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.554819 4856 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" podStartSLOduration=127.554798085 podStartE2EDuration="2m7.554798085s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:13.536031686 +0000 UTC m=+155.949424944" watchObservedRunningTime="2025-11-22 07:05:13.554798085 +0000 UTC m=+155.968191343" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.556033 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6s7wh" podStartSLOduration=127.556027205 podStartE2EDuration="2m7.556027205s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:13.554704513 +0000 UTC m=+155.968097781" watchObservedRunningTime="2025-11-22 07:05:13.556027205 +0000 UTC m=+155.969420453" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.562281 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.562502 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.06248053 +0000 UTC m=+156.475873788 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.562721 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.564973 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.06495644 +0000 UTC m=+156.478349698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.579170 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-56qbr" podStartSLOduration=128.57914945 podStartE2EDuration="2m8.57914945s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:13.578502304 +0000 UTC m=+155.991895612" watchObservedRunningTime="2025-11-22 07:05:13.57914945 +0000 UTC m=+155.992542708" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.603363 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-cbrkr" podStartSLOduration=128.603336099 podStartE2EDuration="2m8.603336099s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:13.602205982 +0000 UTC m=+156.015599250" watchObservedRunningTime="2025-11-22 07:05:13.603336099 +0000 UTC m=+156.016729367" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.633254 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-slgnm" podStartSLOduration=127.633216396 podStartE2EDuration="2m7.633216396s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:13.622020907 +0000 UTC m=+156.035414175" watchObservedRunningTime="2025-11-22 07:05:13.633216396 +0000 UTC m=+156.046609654" Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.665341 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.666135 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.166118785 +0000 UTC m=+156.579512043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.768537 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.769437 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.269404192 +0000 UTC m=+156.682797630 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.870575 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.870779 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.370754532 +0000 UTC m=+156.784147800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.870942 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.871389 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.371379957 +0000 UTC m=+156.784773215 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.972458 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.972757 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.472709716 +0000 UTC m=+156.886102994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:13 crc kubenswrapper[4856]: I1122 07:05:13.973141 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:13 crc kubenswrapper[4856]: E1122 07:05:13.973623 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.473603378 +0000 UTC m=+156.886996646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.074489 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.074975 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.574950668 +0000 UTC m=+156.988343926 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.176310 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.176770 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.676750799 +0000 UTC m=+157.090144057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.279578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.280098 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.780066156 +0000 UTC m=+157.193459584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.383141 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.383675 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.88365303 +0000 UTC m=+157.297046288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.402911 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:14 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:14 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:14 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.403004 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.484669 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.484825 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.984792756 +0000 UTC m=+157.398186014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.485209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.485587 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:14.985576684 +0000 UTC m=+157.398969942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.487213 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" event={"ID":"3a58051f-3a17-420b-aad3-453e819b7b85","Type":"ContainerStarted","Data":"db2db8261a3cf2850fbbacd82706f359f69199f449b1925e1db99e41f93c3124"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.496111 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" event={"ID":"312bb5c3-467c-48bb-967f-b8aadfa43e94","Type":"ContainerStarted","Data":"f3facfbbd1fee55009291c29ad2be211c4a9aefffeddce76b4ebccd4c63db4b8"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.498960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" event={"ID":"f594bec6-55be-4db1-a25e-1fbe651b3eb2","Type":"ContainerStarted","Data":"76a3d1f70e654c723a38a481c2c1f289666b223a7b8e8b6d62c2342595fa6761"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.504334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" event={"ID":"eb186618-19e9-4d7e-93ab-38fba228147d","Type":"ContainerStarted","Data":"98ea172248a0aabae7a8928406a05f4a95b0d54bba6c744267057e403df1b01f"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.510001 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" event={"ID":"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b","Type":"ContainerStarted","Data":"c20c148327e9b4fe35609d20d4436b2af503fccf168cda2e637c7bfa57b73a1a"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.510723 4856 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hzskf" podStartSLOduration=128.510707356 podStartE2EDuration="2m8.510707356s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.510274006 +0000 UTC m=+156.923667274" watchObservedRunningTime="2025-11-22 07:05:14.510707356 +0000 UTC m=+156.924100614" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.517035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" event={"ID":"20d49e34-b412-49d0-8236-227ae0043102","Type":"ContainerStarted","Data":"c1e682722299cb8414291959f0127dc13304bc425fb68cf227565881399a874f"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.526274 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" event={"ID":"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0","Type":"ContainerStarted","Data":"1d0fdfbd8ce6eae6514635e829c0064a508db366427cec3a76b24ec5a0145256"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.526780 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.533274 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-s6j42 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.533613 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.543048 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xfq4s" podStartSLOduration=128.54298049 podStartE2EDuration="2m8.54298049s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.527329085 +0000 UTC m=+156.940722343" watchObservedRunningTime="2025-11-22 07:05:14.54298049 +0000 UTC m=+156.956373958" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.546679 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" event={"ID":"9a816ade-c1d6-48c0-a246-4d3407f90e58","Type":"ContainerStarted","Data":"642f26ddc4d161287c6d6419e8101b904d24c69b7ad7273e694a840097e31547"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.547939 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.555619 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nnhrm" 
event={"ID":"0a5ccb31-7635-4995-926a-927e72a69546","Type":"ContainerStarted","Data":"479925a86a9e618ce9efca23a46d92edabb78948bb124f21fc4eb362483ac9e0"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.559049 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" podStartSLOduration=128.559014815 podStartE2EDuration="2m8.559014815s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.555117822 +0000 UTC m=+156.968511080" watchObservedRunningTime="2025-11-22 07:05:14.559014815 +0000 UTC m=+156.972408073" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.563099 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2hxc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.563177 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.575767 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" event={"ID":"4dff5c22-ed64-4f83-9f80-3c618d5585ab","Type":"ContainerStarted","Data":"83c4d9d83b32533bc108d7b69fd3449284580aaf9b7266965de8d79dfb389540"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.580441 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mvcdt" event={"ID":"15bf26f9-ee66-489a-bda0-cacb5b094844","Type":"ContainerStarted","Data":"478879386f4fd84a22dc896bc5fb97a5aff28bbdbbab16f468feafe2be793180"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.582378 4856 generic.go:334] "Generic (PLEG): container finished" podID="4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c" containerID="4c8419c8738d9ab89d30c38007a86d8984fabeaf9bc42ffe741c3a8df72baffa" exitCode=0 Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.582496 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" event={"ID":"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c","Type":"ContainerDied","Data":"4c8419c8738d9ab89d30c38007a86d8984fabeaf9bc42ffe741c3a8df72baffa"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.584345 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" podStartSLOduration=129.584332352 podStartE2EDuration="2m9.584332352s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.584030885 +0000 UTC m=+156.997424143" watchObservedRunningTime="2025-11-22 07:05:14.584332352 +0000 UTC m=+156.997725610" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.586249 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.586379 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.0863501 +0000 UTC m=+157.499743358 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.588650 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.589239 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.089217049 +0000 UTC m=+157.502610497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.589457 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" event={"ID":"8d7bb875-88e2-48e4-a81b-188f251742c2","Type":"ContainerStarted","Data":"9957572d351796f08003bb57f2969c667b282419ab5b900056ecf7e43cacef73"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.591942 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" event={"ID":"8e21e2b8-4129-4670-96a9-e587637a3a04","Type":"ContainerStarted","Data":"8313160befef727d5ae4921ee4254e1c83aa737df59697f06461cc59a28b239e"} Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.593344 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.593413 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.593739 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.596655 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85rt7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.596693 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" podUID="44fda25c-1ecf-4334-803c-106306261877" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.596762 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-x4fc7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.596854 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.596782 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xhm6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": 
dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.597089 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" podUID="6d4dd468-0b0e-4767-aa06-7800fd9c449f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.607879 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nnhrm" podStartSLOduration=10.607857856 podStartE2EDuration="10.607857856s" podCreationTimestamp="2025-11-22 07:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.602345434 +0000 UTC m=+157.015738692" watchObservedRunningTime="2025-11-22 07:05:14.607857856 +0000 UTC m=+157.021251114" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.658151 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" podStartSLOduration=129.658128141 podStartE2EDuration="2m9.658128141s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.657397804 +0000 UTC m=+157.070791062" watchObservedRunningTime="2025-11-22 07:05:14.658128141 +0000 UTC m=+157.071521399" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.680342 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-nrtm2" podStartSLOduration=129.680296593 podStartE2EDuration="2m9.680296593s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.678282915 +0000 UTC m=+157.091676173" watchObservedRunningTime="2025-11-22 07:05:14.680296593 +0000 UTC m=+157.093689851" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.690933 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.691408 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.191380699 +0000 UTC m=+157.604773947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.691808 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.697141 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.197126846 +0000 UTC m=+157.610520104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.745231 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kbbhd" podStartSLOduration=128.74521028 podStartE2EDuration="2m8.74521028s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.744972883 +0000 UTC m=+157.158366151" watchObservedRunningTime="2025-11-22 07:05:14.74521028 +0000 UTC m=+157.158603538" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.768350 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gpfpp" podStartSLOduration=10.768318554 podStartE2EDuration="10.768318554s" podCreationTimestamp="2025-11-22 07:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.76480763 +0000 UTC m=+157.178200888" watchObservedRunningTime="2025-11-22 07:05:14.768318554 +0000 UTC m=+157.181711812" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.796864 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.797031 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.296996492 +0000 UTC m=+157.710389750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.797310 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.798727 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.298054466 +0000 UTC m=+157.711447724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.806449 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-57k7r" podStartSLOduration=129.806412627 podStartE2EDuration="2m9.806412627s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.795499366 +0000 UTC m=+157.208892634" watchObservedRunningTime="2025-11-22 07:05:14.806412627 +0000 UTC m=+157.219805885" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.856178 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-m6rv8" podStartSLOduration=128.85614813 podStartE2EDuration="2m8.85614813s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.855974866 +0000 UTC m=+157.269368124" watchObservedRunningTime="2025-11-22 07:05:14.85614813 +0000 UTC m=+157.269541398" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.901497 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:14 crc kubenswrapper[4856]: E1122 07:05:14.902403 4856 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.402374678 +0000 UTC m=+157.815767936 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.902593 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" podStartSLOduration=128.902560973 podStartE2EDuration="2m8.902560973s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.90078119 +0000 UTC m=+157.314174458" watchObservedRunningTime="2025-11-22 07:05:14.902560973 +0000 UTC m=+157.315954231" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.925748 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-76kb2" podStartSLOduration=129.925696988 podStartE2EDuration="2m9.925696988s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.924402697 +0000 UTC m=+157.337795955" watchObservedRunningTime="2025-11-22 07:05:14.925696988 +0000 UTC m=+157.339090256" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.948363 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" podStartSLOduration=128.948331711 podStartE2EDuration="2m8.948331711s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.945985794 +0000 UTC m=+157.359379052" watchObservedRunningTime="2025-11-22 07:05:14.948331711 +0000 UTC m=+157.361724969" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.970488 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tkprx" podStartSLOduration=128.970448181 podStartE2EDuration="2m8.970448181s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.96544011 +0000 UTC m=+157.378833368" watchObservedRunningTime="2025-11-22 07:05:14.970448181 +0000 UTC m=+157.383841439" Nov 22 07:05:14 crc kubenswrapper[4856]: I1122 07:05:14.983114 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" podStartSLOduration=128.983093044 podStartE2EDuration="2m8.983093044s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:14.982930979 +0000 UTC m=+157.396324237" watchObservedRunningTime="2025-11-22 07:05:14.983093044 +0000 UTC m=+157.396486302" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.004248 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cxh4g" podStartSLOduration=129.004225481 podStartE2EDuration="2m9.004225481s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.001478735 +0000 UTC m=+157.414871993" watchObservedRunningTime="2025-11-22 07:05:15.004225481 +0000 UTC m=+157.417618749" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.005917 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.006235 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.506221358 +0000 UTC m=+157.919614616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.107272 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.107599 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.607563388 +0000 UTC m=+158.020956646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.107689 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.108182 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.608174523 +0000 UTC m=+158.021567781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.208948 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.209157 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.709115943 +0000 UTC m=+158.122509201 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.209475 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.209888 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.709866252 +0000 UTC m=+158.123259510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.310899 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.311177 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.8111318 +0000 UTC m=+158.224525058 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.311628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.311995 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.81198112 +0000 UTC m=+158.225374368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.403565 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:15 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:15 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:15 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.403688 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.413573 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.413975 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:15.913955935 +0000 UTC m=+158.327349193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.515048 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.515549 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.015527521 +0000 UTC m=+158.428920779 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.600380 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" event={"ID":"ed3bb21a-5d8a-48e2-b115-9953c3021a67","Type":"ContainerStarted","Data":"a1d24876a32d3336229e17ce660c8d978efa57b41c5c1b0ebbd0eeac6311db0b"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.600561 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.602422 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" event={"ID":"8b11b4c9-5fcf-4e7e-9efb-d17dcc4f8f23","Type":"ContainerStarted","Data":"352289bf33ab3966bace2fd1ee673dc4ddb1289535a8b3bb3dfb7497fbf7810b"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.604080 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" event={"ID":"b237e36f-a520-4471-82a5-5d26aff897b1","Type":"ContainerStarted","Data":"df59b9d4254504cd0f696c721cdd277314758a7f5ef19191475e8bc3e6c7a52e"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.606650 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" event={"ID":"d9a5e5b4-a255-4888-b381-e743b2440738","Type":"ContainerStarted","Data":"8f48f1e2f4782033f8a3f0ff5ea46e7838115e24115cd40255854790501e7218"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.609216 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" 
event={"ID":"eb2769ed-3d4b-4e62-8298-b05cc6dcca3b","Type":"ContainerStarted","Data":"e37599fa9ac4413c0b28672775dcd5b056473d904784194bf4a45f6c9b35bbbe"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.610714 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mvcdt" event={"ID":"15bf26f9-ee66-489a-bda0-cacb5b094844","Type":"ContainerStarted","Data":"19b2ddd8d41b4a34a2a37157db943140e8ae4334c757f2c229631637eb78645d"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.610854 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.612056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" event={"ID":"d47abc5e-74bd-4f9a-9a99-1d83d8834ce0","Type":"ContainerStarted","Data":"a8ac08439aaeaa86b8a5356b48f88999a43a6b438797cd85d9f0ee43f5c5f593"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.614044 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" event={"ID":"4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c","Type":"ContainerStarted","Data":"8e3ecf5d2710b379d53bf840aeb1d4d87529f3ad7537c9ec8eab8fd2f7a0f26f"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.615454 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" event={"ID":"e611448f-b8a0-4e60-94a5-51d929ea1b5f","Type":"ContainerStarted","Data":"784701a7b0e1346140cdd3b8441d3aa1d9985b373ecc936e2efc27f88bec50a6"} Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.615714 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.616100 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.116078142 +0000 UTC m=+158.529471400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.616216 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.616318 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.616414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.616455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.616656 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.616792 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.116780298 +0000 UTC m=+158.530173556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617128 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-x4fc7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617175 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617183 4856 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-85rt7 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617242 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" podUID="44fda25c-1ecf-4334-803c-106306261877" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617254 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xhm6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" start-of-body= Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617286 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" podUID="6d4dd468-0b0e-4767-aa06-7800fd9c449f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": dial tcp 10.217.0.35:5443: connect: connection refused" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617299 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2hxc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.617353 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.618445 4856 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-s6j42 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.618546 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.622828 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" podStartSLOduration=129.622812423 podStartE2EDuration="2m9.622812423s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.621691077 +0000 UTC m=+158.035084335" watchObservedRunningTime="2025-11-22 07:05:15.622812423 +0000 UTC m=+158.036205681" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.625546 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.628479 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.631948 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.632804 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" podStartSLOduration=129.632767802 podStartE2EDuration="2m9.632767802s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.032753355 +0000 UTC m=+157.446146613" watchObservedRunningTime="2025-11-22 07:05:15.632767802 +0000 UTC m=+158.046161060" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.634726 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.634820 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.651784 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-klclm" podStartSLOduration=129.651758377 podStartE2EDuration="2m9.651758377s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.648691524 +0000 UTC m=+158.062084802" watchObservedRunningTime="2025-11-22 07:05:15.651758377 +0000 UTC m=+158.065151635" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.683644 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.710380 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lmvpd" podStartSLOduration=129.710351752 podStartE2EDuration="2m9.710351752s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.700564297 +0000 UTC m=+158.113957545" watchObservedRunningTime="2025-11-22 07:05:15.710351752 +0000 UTC m=+158.123745010" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.718638 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.718871 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.218837966 +0000 UTC m=+158.632231224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.719860 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.727540 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.227496443 +0000 UTC m=+158.640889861 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.728595 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-2szb8" podStartSLOduration=129.728571539 podStartE2EDuration="2m9.728571539s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.725408863 +0000 UTC m=+158.138802151" watchObservedRunningTime="2025-11-22 07:05:15.728571539 +0000 UTC m=+158.141964797" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.758657 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-89kjz" podStartSLOduration=130.758625919 podStartE2EDuration="2m10.758625919s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.75778944 +0000 UTC m=+158.171182708" watchObservedRunningTime="2025-11-22 07:05:15.758625919 +0000 UTC m=+158.172019177" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.791317 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cbrxg" podStartSLOduration=129.791288963 podStartE2EDuration="2m9.791288963s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.785109434 +0000 UTC m=+158.198502702" watchObservedRunningTime="2025-11-22 07:05:15.791288963 +0000 UTC m=+158.204682221" Nov 22 07:05:15 crc 
kubenswrapper[4856]: I1122 07:05:15.813629 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" podStartSLOduration=129.813602358 podStartE2EDuration="2m9.813602358s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.812786298 +0000 UTC m=+158.226179576" watchObservedRunningTime="2025-11-22 07:05:15.813602358 +0000 UTC m=+158.226995616" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.821358 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.821564 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.321534548 +0000 UTC m=+158.734927806 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.821767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.822234 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.322217495 +0000 UTC m=+158.735610753 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.860631 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mvcdt" podStartSLOduration=11.860600455 podStartE2EDuration="11.860600455s" podCreationTimestamp="2025-11-22 07:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.835854352 +0000 UTC m=+158.249247620" watchObservedRunningTime="2025-11-22 07:05:15.860600455 +0000 UTC m=+158.273993713" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.861203 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-mdfqc" podStartSLOduration=129.861195409 podStartE2EDuration="2m9.861195409s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:15.859000986 +0000 UTC m=+158.272394244" watchObservedRunningTime="2025-11-22 07:05:15.861195409 +0000 UTC m=+158.274588667" Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.925446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:15 crc kubenswrapper[4856]: E1122 07:05:15.925902 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.42587834 +0000 UTC m=+158.839271598 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:15 crc kubenswrapper[4856]: I1122 07:05:15.951705 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.027405 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.027887 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.527868436 +0000 UTC m=+158.941261694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.129676 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.130169 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.630137808 +0000 UTC m=+159.043531066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.130894 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.131370 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.631354007 +0000 UTC m=+159.044747265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: W1122 07:05:16.209295 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-5b1c25d721b5e5a53942b6142c588c189bf6e882f50c147ab2c855912c741471 WatchSource:0}: Error finding container 5b1c25d721b5e5a53942b6142c588c189bf6e882f50c147ab2c855912c741471: Status 404 returned error can't find the container with id 5b1c25d721b5e5a53942b6142c588c189bf6e882f50c147ab2c855912c741471 Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.233016 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.233410 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.733388624 +0000 UTC m=+159.146781882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: W1122 07:05:16.311206 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3e3bf3915fc5071790b5d702bdf5f35a3a65599b913149b1e85b73311f189614 WatchSource:0}: Error finding container 3e3bf3915fc5071790b5d702bdf5f35a3a65599b913149b1e85b73311f189614: Status 404 returned error can't find the container with id 3e3bf3915fc5071790b5d702bdf5f35a3a65599b913149b1e85b73311f189614 Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.335770 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.346657 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-22 07:05:16.846624918 +0000 UTC m=+159.260018176 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.365173 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.367376 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.367453 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.367581 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.367673 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.368200 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.368232 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.406734 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:16 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:16 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:16 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.406858 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" 
podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.447495 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.447634 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.947607971 +0000 UTC m=+159.361001229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.447926 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.448237 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:16.948229785 +0000 UTC m=+159.361623043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.549559 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.049538714 +0000 UTC m=+159.462931972 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.549590 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.549847 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.550108 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.050101958 +0000 UTC m=+159.463495206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.591690 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.592192 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.594122 4856 patch_prober.go:28] interesting pod/console-f9d7485db-57k7r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.594183 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-57k7r" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.631043 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0b2c4cfe844b4a7dd7af04acf636e90a0a1cdb42f740cc5f1131d793e6248786"} Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 
07:05:16.633164 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3e3bf3915fc5071790b5d702bdf5f35a3a65599b913149b1e85b73311f189614"} Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.637961 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5b1c25d721b5e5a53942b6142c588c189bf6e882f50c147ab2c855912c741471"} Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.638684 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2hxc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.638731 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.638867 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-s6j42 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.638937 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.651592 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.652138 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.152096014 +0000 UTC m=+159.565489412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.666420 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" podStartSLOduration=130.666389856 podStartE2EDuration="2m10.666389856s" podCreationTimestamp="2025-11-22 07:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:16.662873141 +0000 UTC m=+159.076266399" watchObservedRunningTime="2025-11-22 07:05:16.666389856 +0000 UTC m=+159.079783114" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.687590 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" podStartSLOduration=131.687561534 podStartE2EDuration="2m11.687561534s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:16.685466563 +0000 UTC m=+159.098859841" watchObservedRunningTime="2025-11-22 07:05:16.687561534 +0000 UTC m=+159.100954792" Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.753294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.755710 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.255686568 +0000 UTC m=+159.669080016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.854977 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.855162 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.355128202 +0000 UTC m=+159.768521460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.855220 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.855636 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.355628344 +0000 UTC m=+159.769021602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.956340 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.956628 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.456577034 +0000 UTC m=+159.869970302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:16 crc kubenswrapper[4856]: I1122 07:05:16.956965 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:16 crc kubenswrapper[4856]: E1122 07:05:16.958408 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.458396107 +0000 UTC m=+159.871789545 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.053706 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.057116 4856 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-csttt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.057202 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" podUID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.057595 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.057714 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.557690358 +0000 UTC m=+159.971083616 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.057731 4856 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-csttt container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.057792 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" podUID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.058102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.058463 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.558454997 +0000 UTC m=+159.971848255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.159835 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.160167 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.660110664 +0000 UTC m=+160.073503932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.160269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.160762 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.66074398 +0000 UTC m=+160.074137228 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.263046 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.263608 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.763586735 +0000 UTC m=+160.176979993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.336374 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-x4fc7 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.336490 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.336541 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-x4fc7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.336624 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.364704 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.365166 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.865149921 +0000 UTC m=+160.278543179 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.377323 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.393565 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-85rt7" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.396314 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.400035 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:17 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:17 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:17 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.400101 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.441338 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-6b26c" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.465892 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.466096 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.966065491 +0000 UTC m=+160.379458749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.466252 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.467582 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:17.967572607 +0000 UTC m=+160.380966065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.567811 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.568347 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.068324223 +0000 UTC m=+160.481717481 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.594263 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rc5xn" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.625223 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2hxc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.625773 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.632870 4856 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-s6j42 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.632956 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.15:8443/healthz\": dial tcp 10.217.0.15:8443: connect: connection refused" Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.670085 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.672090 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.17207532 +0000 UTC m=+160.585468578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.771735 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.771943 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.271915384 +0000 UTC m=+160.685308642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.772963 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.773373 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.273350108 +0000 UTC m=+160.686743366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.874714 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.874997 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.374955185 +0000 UTC m=+160.788348453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.875101 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.875747 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.375729133 +0000 UTC m=+160.789122391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.980081 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.980323 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.480284301 +0000 UTC m=+160.893677559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:17 crc kubenswrapper[4856]: I1122 07:05:17.980606 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:17 crc kubenswrapper[4856]: E1122 07:05:17.981046 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.481038649 +0000 UTC m=+160.894431907 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.082095 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.082366 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.582316707 +0000 UTC m=+160.995709965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.082904 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.083323 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.583304201 +0000 UTC m=+160.996697479 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.184321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.184800 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.684771034 +0000 UTC m=+161.098164292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.286479 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.286923 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.786904393 +0000 UTC m=+161.200297641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.371900 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.374628 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ntxhf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.374692 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" podUID="4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.374716 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ntxhf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.374776 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" podUID="4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.375309 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ntxhf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.375346 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" podUID="4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.383134 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xhm6 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.383199 4856 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" podUID="6d4dd468-0b0e-4767-aa06-7800fd9c449f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.384257 4856 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5xhm6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.384308 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" podUID="6d4dd468-0b0e-4767-aa06-7800fd9c449f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.35:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.387472 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.387615 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.887595117 +0000 UTC m=+161.300988365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.387868 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.388271 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.888259704 +0000 UTC m=+161.301652962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.400347 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:18 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:18 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:18 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.400434 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.489348 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.490244 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:18.990203467 +0000 UTC m=+161.403596755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.591371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.591872 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.091856475 +0000 UTC m=+161.505249733 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.653168 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4529416d7e3357d84dee73cb2f3c96d8f5d873ae94051b5de1e7cb788d15a54a"} Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.656836 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" event={"ID":"d9a5e5b4-a255-4888-b381-e743b2440738","Type":"ContainerStarted","Data":"0a6b26d87c6bd22990b5053b7659d675f8a7b61364ec694a30181351e415c47d"} Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.692159 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.692428 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.192397176 +0000 UTC m=+161.605790434 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.693167 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.693588 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.193567904 +0000 UTC m=+161.606961162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.795256 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.795582 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.295530748 +0000 UTC m=+161.708924016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.795998 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.796419 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.29640501 +0000 UTC m=+161.709798268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:18 crc kubenswrapper[4856]: I1122 07:05:18.898181 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:18 crc kubenswrapper[4856]: E1122 07:05:18.898838 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.398815515 +0000 UTC m=+161.812208773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.000358 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.000897 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.500880943 +0000 UTC m=+161.914274211 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.012861 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.013705 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.016343 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.016568 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.026653 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.101036 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.101291 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.60124942 +0000 UTC m=+162.014642668 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.101367 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.101563 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.101638 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.102114 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.60210677 +0000 UTC m=+162.015500028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.203330 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.203624 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.703580073 +0000 UTC m=+162.116973331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.203699 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.203798 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.203896 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.203936 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.204413 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.704402252 +0000 UTC m=+162.117795510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.232871 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.305337 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.305998 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.805961208 +0000 UTC m=+162.219354476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.368548 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.401732 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:19 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:19 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:19 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.402250 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.408063 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.408572 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:19.908545408 +0000 UTC m=+162.321938666 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.509270 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.510012 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.009986921 +0000 UTC m=+162.423380179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.610730 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.612538 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.112520879 +0000 UTC m=+162.525914137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.614967 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-75ztc"] Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.616159 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.620535 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.643951 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.653258 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75ztc"] Nov 22 07:05:19 crc kubenswrapper[4856]: W1122 07:05:19.656064 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf8717add_c237_46d4_8ea3_dc4c6b8cbeb8.slice/crio-49ea8664d17fd7bdf613bc37eecd5874c987bdb9b98d7e317aa7e18cd96ebe59 WatchSource:0}: Error finding container 49ea8664d17fd7bdf613bc37eecd5874c987bdb9b98d7e317aa7e18cd96ebe59: Status 404 returned error can't find the container with id 49ea8664d17fd7bdf613bc37eecd5874c987bdb9b98d7e317aa7e18cd96ebe59 Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.672554 4856 generic.go:334] "Generic (PLEG): container finished" podID="20d49e34-b412-49d0-8236-227ae0043102" containerID="c1e682722299cb8414291959f0127dc13304bc425fb68cf227565881399a874f" exitCode=0 Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.672643 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" event={"ID":"20d49e34-b412-49d0-8236-227ae0043102","Type":"ContainerDied","Data":"c1e682722299cb8414291959f0127dc13304bc425fb68cf227565881399a874f"} Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.675723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8","Type":"ContainerStarted","Data":"49ea8664d17fd7bdf613bc37eecd5874c987bdb9b98d7e317aa7e18cd96ebe59"} Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.716187 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.716309 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.216289217 +0000 UTC m=+162.629682475 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.716476 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-catalog-content\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.716540 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.716560 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-utilities\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.716604 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6swr\" (UniqueName: \"kubernetes.io/projected/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-kube-api-access-r6swr\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.718268 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.218242863 +0000 UTC m=+162.631636121 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.818725 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.818904 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.318878247 +0000 UTC m=+162.732271505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.819049 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.819125 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-utilities\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.819245 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6swr\" (UniqueName: \"kubernetes.io/projected/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-kube-api-access-r6swr\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.819342 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-catalog-content\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.819577 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-utilities\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.819871 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.31986248 +0000 UTC m=+162.733255738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.820026 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-catalog-content\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.831323 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gvf9n"] Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.832527 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.836473 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.863556 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gvf9n"] Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.874844 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6swr\" (UniqueName: \"kubernetes.io/projected/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-kube-api-access-r6swr\") pod \"community-operators-75ztc\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.920717 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.920942 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.420909923 +0000 UTC m=+162.834303181 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.921167 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-utilities\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.921239 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8sjp\" (UniqueName: \"kubernetes.io/projected/52860224-c188-4eda-830e-9101706f4ce2-kube-api-access-h8sjp\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.921455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:19 crc kubenswrapper[4856]: E1122 07:05:19.921846 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.421836196 +0000 UTC m=+162.835229454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.922237 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-catalog-content\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:19 crc kubenswrapper[4856]: I1122 07:05:19.968367 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.018122 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m79j6"] Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.019474 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.024118 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.024403 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-utilities\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.024462 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8sjp\" (UniqueName: \"kubernetes.io/projected/52860224-c188-4eda-830e-9101706f4ce2-kube-api-access-h8sjp\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.024556 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-catalog-content\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.025251 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-catalog-content\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.025344 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.525320057 +0000 UTC m=+162.938713315 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.025623 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-utilities\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.033569 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m79j6"] Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.052377 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8sjp\" (UniqueName: \"kubernetes.io/projected/52860224-c188-4eda-830e-9101706f4ce2-kube-api-access-h8sjp\") pod \"certified-operators-gvf9n\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.126020 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-catalog-content\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.126170 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.126295 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-utilities\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.126327 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqgpt\" (UniqueName: \"kubernetes.io/projected/56f76d43-404b-4b05-97d9-39a17e5774ed-kube-api-access-vqgpt\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.126965 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.626922983 +0000 UTC m=+163.040316431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.149859 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.204867 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xx8tx"] Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.208936 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.216756 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xx8tx"] Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.227739 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.228163 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-catalog-content\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.228351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-utilities\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.228394 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqgpt\" (UniqueName: \"kubernetes.io/projected/56f76d43-404b-4b05-97d9-39a17e5774ed-kube-api-access-vqgpt\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.229122 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.729102703 +0000 UTC m=+163.142495961 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.229979 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-catalog-content\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.230292 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-utilities\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.298420 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqgpt\" (UniqueName: \"kubernetes.io/projected/56f76d43-404b-4b05-97d9-39a17e5774ed-kube-api-access-vqgpt\") pod \"community-operators-m79j6\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.330020 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfxx\" (UniqueName: \"kubernetes.io/projected/072e5312-2542-496b-bda2-58d411f4f1c3-kube-api-access-qjfxx\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.330095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-utilities\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.330212 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-catalog-content\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.330240 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.330736 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.83071927 +0000 UTC m=+163.244112528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.350978 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.401105 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:20 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:20 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:20 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.401228 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.431542 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.431704 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.931674821 +0000 UTC m=+163.345068079 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.431952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-catalog-content\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.431993 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.432036 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjfxx\" (UniqueName: \"kubernetes.io/projected/072e5312-2542-496b-bda2-58d411f4f1c3-kube-api-access-qjfxx\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.432455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-catalog-content\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.432646 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:20.932619574 +0000 UTC m=+163.346012832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.433062 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-utilities\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.433098 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-utilities\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.450640 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gvf9n"] Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.461252 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjfxx\" (UniqueName: \"kubernetes.io/projected/072e5312-2542-496b-bda2-58d411f4f1c3-kube-api-access-qjfxx\") pod \"certified-operators-xx8tx\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.538071 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.538667 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.038649865 +0000 UTC m=+163.452043123 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.572149 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.573420 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-75ztc"] Nov 22 07:05:20 crc kubenswrapper[4856]: W1122 07:05:20.589480 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02bb740d_242c_4846_8bbf_5fe3e4f1b97a.slice/crio-eafca7ea07a33013d7dee24b9508e3e2b9b8ae620c158d9c14cc146be0bc85fc WatchSource:0}: Error finding container eafca7ea07a33013d7dee24b9508e3e2b9b8ae620c158d9c14cc146be0bc85fc: Status 404 returned error can't find the container with id eafca7ea07a33013d7dee24b9508e3e2b9b8ae620c158d9c14cc146be0bc85fc Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.628065 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m79j6"] Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.644088 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.644491 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.144471923 +0000 UTC m=+163.557865181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.690782 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"59e22405d09e45247e0508f73b3c8a939a6d56d2d9f2e0c182cb5d5aaa1bf6a8"} Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.697499 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" event={"ID":"53ec176a-6d8d-43a2-8523-78fd3cd12cd9","Type":"ContainerStarted","Data":"b933248aa7882ee63d28abacd39e46b80becc8440bf2dd1ece65edafb5811126"} Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.702310 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ztc" event={"ID":"02bb740d-242c-4846-8bbf-5fe3e4f1b97a","Type":"ContainerStarted","Data":"eafca7ea07a33013d7dee24b9508e3e2b9b8ae620c158d9c14cc146be0bc85fc"} Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.705873 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"93138f953a79114ba3c46643684fa28a58014c1e808ccceec8eb200df635031a"} Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.707653 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m79j6" event={"ID":"56f76d43-404b-4b05-97d9-39a17e5774ed","Type":"ContainerStarted","Data":"77c7ecf1b52c9d437296280b7599642a1172a04feefa94231a2f0a37eb7b7247"} Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.720598 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerStarted","Data":"49b7575847245235d0c6fc5220d4af7a5a9f23c37325e254afadd39706ed8147"} Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.745960 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.746701 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.246678634 +0000 UTC m=+163.660071892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.848740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.850299 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.350282318 +0000 UTC m=+163.763675576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.950631 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:20 crc kubenswrapper[4856]: E1122 07:05:20.951081 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.451060455 +0000 UTC m=+163.864453713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.967415 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" podStartSLOduration=135.967380986 podStartE2EDuration="2m15.967380986s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:20.835200036 +0000 UTC m=+163.248593324" watchObservedRunningTime="2025-11-22 07:05:20.967380986 +0000 UTC m=+163.380774244" Nov 22 07:05:20 crc kubenswrapper[4856]: I1122 07:05:20.969441 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xx8tx"] Nov 22 07:05:21 crc kubenswrapper[4856]: W1122 07:05:21.029130 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod072e5312_2542_496b_bda2_58d411f4f1c3.slice/crio-1c8c6e0e0038cf6719881be81c738d4d0260e501ef05a9804780b51684d8d3dc WatchSource:0}: Error finding container 1c8c6e0e0038cf6719881be81c738d4d0260e501ef05a9804780b51684d8d3dc: Status 404 returned error can't find the container with id 1c8c6e0e0038cf6719881be81c738d4d0260e501ef05a9804780b51684d8d3dc Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.054412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.055111 4856 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.555080169 +0000 UTC m=+163.968473607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.119066 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.156245 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.157053 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.657010563 +0000 UTC m=+164.070403821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.258591 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20d49e34-b412-49d0-8236-227ae0043102-secret-volume\") pod \"20d49e34-b412-49d0-8236-227ae0043102\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.258671 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59skw\" (UniqueName: \"kubernetes.io/projected/20d49e34-b412-49d0-8236-227ae0043102-kube-api-access-59skw\") pod \"20d49e34-b412-49d0-8236-227ae0043102\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.258696 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d49e34-b412-49d0-8236-227ae0043102-config-volume\") pod \"20d49e34-b412-49d0-8236-227ae0043102\" (UID: \"20d49e34-b412-49d0-8236-227ae0043102\") " Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.258890 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.259215 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.759199673 +0000 UTC m=+164.172592931 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.261327 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d49e34-b412-49d0-8236-227ae0043102-config-volume" (OuterVolumeSpecName: "config-volume") pod "20d49e34-b412-49d0-8236-227ae0043102" (UID: "20d49e34-b412-49d0-8236-227ae0043102"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.272020 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d49e34-b412-49d0-8236-227ae0043102-kube-api-access-59skw" (OuterVolumeSpecName: "kube-api-access-59skw") pod "20d49e34-b412-49d0-8236-227ae0043102" (UID: "20d49e34-b412-49d0-8236-227ae0043102"). InnerVolumeSpecName "kube-api-access-59skw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.272856 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d49e34-b412-49d0-8236-227ae0043102-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20d49e34-b412-49d0-8236-227ae0043102" (UID: "20d49e34-b412-49d0-8236-227ae0043102"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.361396 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.361967 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20d49e34-b412-49d0-8236-227ae0043102-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.362082 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59skw\" (UniqueName: \"kubernetes.io/projected/20d49e34-b412-49d0-8236-227ae0043102-kube-api-access-59skw\") on node \"crc\" DevicePath \"\"" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.362083 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:05:21.862034159 +0000 UTC m=+164.275427427 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.362170 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20d49e34-b412-49d0-8236-227ae0043102-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.372075 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ntxhf container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.372165 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" podUID="4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.372105 4856 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-ntxhf container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.372242 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" podUID="4ad6e66e-dfbf-443e-85fe-9f4f4ed88b0c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.403888 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:21 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:21 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:21 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.403974 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.420448 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.420561 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.423047 4856 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lpbp9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.423160 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" podUID="d9a5e5b4-a255-4888-b381-e743b2440738" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.463717 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.464326 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:21.964292541 +0000 UTC m=+164.377685929 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.559490 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.559575 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.565852 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.566052 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.06602432 +0000 UTC m=+164.479417578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.566703 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.567093 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.067082116 +0000 UTC m=+164.480475374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.667813 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.668113 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.168072347 +0000 UTC m=+164.581465605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.668427 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.668862 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.168846876 +0000 UTC m=+164.582240134 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.725541 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8","Type":"ContainerStarted","Data":"d24f691d97ca499bf0ef916d4ba0b12806f2a6aa857ed1128580b6a8f7df8aa8"} Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.727231 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xx8tx" event={"ID":"072e5312-2542-496b-bda2-58d411f4f1c3","Type":"ContainerStarted","Data":"1c8c6e0e0038cf6719881be81c738d4d0260e501ef05a9804780b51684d8d3dc"} Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.729397 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" event={"ID":"20d49e34-b412-49d0-8236-227ae0043102","Type":"ContainerDied","Data":"aece58ebeead4a67cd4f5d8110e8a28fe7826570f1968199b52ed1860b057d8a"} Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.729483 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aece58ebeead4a67cd4f5d8110e8a28fe7826570f1968199b52ed1860b057d8a" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.729772 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.770650 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.771249 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.270889612 +0000 UTC m=+164.684282870 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.771342 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.772104 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.272077721 +0000 UTC m=+164.685470979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.824778 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dxlt7"] Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.825602 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d49e34-b412-49d0-8236-227ae0043102" containerName="collect-profiles" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.825619 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d49e34-b412-49d0-8236-227ae0043102" containerName="collect-profiles" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.825791 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d49e34-b412-49d0-8236-227ae0043102" containerName="collect-profiles" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.826919 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.832438 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.835022 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxlt7"] Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.874561 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.874775 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.374735163 +0000 UTC m=+164.788128431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.875104 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.876615 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.376606938 +0000 UTC m=+164.790000196 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.904874 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.905758 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.908097 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.909563 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.967097 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.976523 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.976884 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-utilities\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.976923 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-catalog-content\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:21 crc kubenswrapper[4856]: I1122 07:05:21.976965 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8pqb\" (UniqueName: \"kubernetes.io/projected/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-kube-api-access-c8pqb\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:21 crc kubenswrapper[4856]: E1122 07:05:21.977485 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.477461146 +0000 UTC m=+164.890854394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.079099 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-utilities\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.079490 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-catalog-content\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.079667 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.079815 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8pqb\" (UniqueName: \"kubernetes.io/projected/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-kube-api-access-c8pqb\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.080435 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-utilities\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.080415 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-catalog-content\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.080057 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.580033965 +0000 UTC m=+164.993427343 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.080706 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa1d67b-074f-437d-9b55-2d0522bb1db8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.080828 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa1d67b-074f-437d-9b55-2d0522bb1db8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.106333 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8pqb\" (UniqueName: \"kubernetes.io/projected/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-kube-api-access-c8pqb\") pod \"redhat-marketplace-dxlt7\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.145676 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.182680 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.183182 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa1d67b-074f-437d-9b55-2d0522bb1db8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.183247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa1d67b-074f-437d-9b55-2d0522bb1db8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.183418 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa1d67b-074f-437d-9b55-2d0522bb1db8-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.183545 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.683524867 +0000 UTC m=+165.096918135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.208356 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa1d67b-074f-437d-9b55-2d0522bb1db8-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.220216 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.228443 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qzwhc"] Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.229603 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.250961 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzwhc"] Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.289991 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-catalog-content\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.290276 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.290370 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq78l\" (UniqueName: \"kubernetes.io/projected/657ddf29-027f-425f-92bb-27a76a9c19c6-kube-api-access-sq78l\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.290537 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-utilities\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.291152 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.791134037 +0000 UTC m=+165.204527295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.392255 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.392969 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.892922468 +0000 UTC m=+165.306315726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.393144 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-catalog-content\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.393501 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.393587 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq78l\" (UniqueName: \"kubernetes.io/projected/657ddf29-027f-425f-92bb-27a76a9c19c6-kube-api-access-sq78l\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.393618 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-utilities\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.393758 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-catalog-content\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.393913 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:22.893892011 +0000 UTC m=+165.307285269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.394050 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-utilities\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.419751 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:22 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:22 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:22 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.419820 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.446060 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq78l\" (UniqueName: \"kubernetes.io/projected/657ddf29-027f-425f-92bb-27a76a9c19c6-kube-api-access-sq78l\") pod \"redhat-marketplace-qzwhc\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.501765 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.501949 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.001932883 +0000 UTC m=+165.415326141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.502163 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.502542 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.002534587 +0000 UTC m=+165.415927835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.602980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.603217 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.1031835 +0000 UTC m=+165.516576758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.603261 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.603628 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.10361657 +0000 UTC m=+165.517009828 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.614914 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.682329 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.704265 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.704927 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.204905619 +0000 UTC m=+165.618298877 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.726964 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxlt7"] Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.753252 4856 generic.go:334] "Generic (PLEG): container finished" podID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerID="7f0755ec0247e397e9891c13f4eaa9cbc7d9ccae6441700d9a817b3cff437506" exitCode=0 Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.753396 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ztc" event={"ID":"02bb740d-242c-4846-8bbf-5fe3e4f1b97a","Type":"ContainerDied","Data":"7f0755ec0247e397e9891c13f4eaa9cbc7d9ccae6441700d9a817b3cff437506"} Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.762531 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"baa1d67b-074f-437d-9b55-2d0522bb1db8","Type":"ContainerStarted","Data":"ac4fb497dc02ae2f887c3f92c7427d966ef8d9ac57d91865bc9a111a7caec887"} Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.774666 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m79j6" event={"ID":"56f76d43-404b-4b05-97d9-39a17e5774ed","Type":"ContainerStarted","Data":"8bda9a30d86c3e8d6faae562fe4de4759384aa6ff38989446f9e02dd413e1829"} Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.806217 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.806718 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.306702469 +0000 UTC m=+165.720095717 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.822752 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerStarted","Data":"c58ef295300db8ebc25113075638644a0fc6d13316c264dfc4377003b21045af"} Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.829370 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sptws"] Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.830638 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.836226 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.857327 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sptws"] Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.859070 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.859056425 podStartE2EDuration="3.859056425s" podCreationTimestamp="2025-11-22 07:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:22.85758283 +0000 UTC m=+165.270976098" watchObservedRunningTime="2025-11-22 07:05:22.859056425 +0000 UTC m=+165.272449683" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.926119 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.927535 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-utilities\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.927601 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-catalog-content\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:22 crc kubenswrapper[4856]: I1122 07:05:22.927641 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxbrk\" (UniqueName: 
\"kubernetes.io/projected/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-kube-api-access-cxbrk\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:22 crc kubenswrapper[4856]: E1122 07:05:22.931994 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.431938233 +0000 UTC m=+165.845331581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.037300 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-utilities\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.037353 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-catalog-content\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.037377 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxbrk\" (UniqueName: \"kubernetes.io/projected/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-kube-api-access-cxbrk\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.037417 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.037746 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.537732709 +0000 UTC m=+165.951125977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.038232 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-utilities\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.038437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-catalog-content\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.109364 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzwhc"] Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.120666 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxbrk\" (UniqueName: \"kubernetes.io/projected/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-kube-api-access-cxbrk\") pod \"redhat-operators-sptws\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.139057 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.139470 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.639456318 +0000 UTC m=+166.052849576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.183935 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.212245 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mhjvb"] Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.213493 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.228077 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhjvb"] Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.240382 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.241279 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.241625 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.741610528 +0000 UTC m=+166.155003786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.265540 4856 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-rpfn9 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]log ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]etcd ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]etcd-readiness ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 22 07:05:23 crc kubenswrapper[4856]: [-]informer-sync failed: reason withheld Nov 22 07:05:23 crc kubenswrapper[4856]: [+]poststarthook/generic-apiserver-start-informers ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]poststarthook/max-in-flight-filter ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-StartUserInformer ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-StartOAuthInformer ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Nov 22 07:05:23 crc kubenswrapper[4856]: [+]shutdown ok Nov 22 07:05:23 crc kubenswrapper[4856]: readyz check failed Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.265596 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" podUID="b237e36f-a520-4471-82a5-5d26aff897b1" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.342966 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.343278 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.843225304 +0000 UTC m=+166.256618692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.343388 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-catalog-content\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.345444 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj98g\" (UniqueName: \"kubernetes.io/projected/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-kube-api-access-bj98g\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.345552 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-utilities\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.345651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.346464 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:23.846452112 +0000 UTC m=+166.259845370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.399608 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:23 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:23 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:23 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.399958 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.450103 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.450437 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-catalog-content\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.450520 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj98g\" (UniqueName: \"kubernetes.io/projected/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-kube-api-access-bj98g\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.450557 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-utilities\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.451018 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-utilities\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.451103 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:05:23.95108247 +0000 UTC m=+166.364475728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.451342 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-catalog-content\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.502609 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj98g\" (UniqueName: \"kubernetes.io/projected/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-kube-api-access-bj98g\") pod \"redhat-operators-mhjvb\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.552089 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.552419 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.05240698 +0000 UTC m=+166.465800238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.652962 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.653304 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.153288599 +0000 UTC m=+166.566681857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.670290 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sptws"] Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.704983 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.755148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.755818 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.255802097 +0000 UTC m=+166.669195355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.832997 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerStarted","Data":"cc0289042839db9f281c6598675e98f1149088a5c6be8db6126170a762b73dbf"} Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.835700 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzwhc" event={"ID":"657ddf29-027f-425f-92bb-27a76a9c19c6","Type":"ContainerStarted","Data":"2fa9b82af75e0c892addcfd362cc1f253d523ebd1000ec611b27fd0c7b4053d4"} Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.837271 4856 generic.go:334] "Generic (PLEG): container finished" podID="f8717add-c237-46d4-8ea3-dc4c6b8cbeb8" containerID="d24f691d97ca499bf0ef916d4ba0b12806f2a6aa857ed1128580b6a8f7df8aa8" exitCode=0 Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.837337 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8","Type":"ContainerDied","Data":"d24f691d97ca499bf0ef916d4ba0b12806f2a6aa857ed1128580b6a8f7df8aa8"} Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.838491 4856 generic.go:334] "Generic (PLEG): container finished" podID="072e5312-2542-496b-bda2-58d411f4f1c3" containerID="db326da0681fefa891ed72ba815941c1b84becf24fa5a429860ac23c012d4797" 
exitCode=0 Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.838565 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xx8tx" event={"ID":"072e5312-2542-496b-bda2-58d411f4f1c3","Type":"ContainerDied","Data":"db326da0681fefa891ed72ba815941c1b84becf24fa5a429860ac23c012d4797"} Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.839651 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxlt7" event={"ID":"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598","Type":"ContainerStarted","Data":"9b15f426f507ff2aa1249ac29714fa4b546decc4315e525ed1a0a709cb858bf9"} Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.840734 4856 generic.go:334] "Generic (PLEG): container finished" podID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerID="8bda9a30d86c3e8d6faae562fe4de4759384aa6ff38989446f9e02dd413e1829" exitCode=0 Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.840799 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m79j6" event={"ID":"56f76d43-404b-4b05-97d9-39a17e5774ed","Type":"ContainerDied","Data":"8bda9a30d86c3e8d6faae562fe4de4759384aa6ff38989446f9e02dd413e1829"} Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.841884 4856 generic.go:334] "Generic (PLEG): container finished" podID="52860224-c188-4eda-830e-9101706f4ce2" containerID="c58ef295300db8ebc25113075638644a0fc6d13316c264dfc4377003b21045af" exitCode=0 Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.842590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerDied","Data":"c58ef295300db8ebc25113075638644a0fc6d13316c264dfc4377003b21045af"} Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.844323 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.856914 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.857024 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.356996573 +0000 UTC m=+166.770389831 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.857110 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.857465 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.357451535 +0000 UTC m=+166.770844793 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.958538 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.958847 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.458801605 +0000 UTC m=+166.872194863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:23 crc kubenswrapper[4856]: I1122 07:05:23.958899 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:23 crc kubenswrapper[4856]: E1122 07:05:23.959347 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.459340097 +0000 UTC m=+166.872733355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.060063 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.060231 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.560194397 +0000 UTC m=+166.973587655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.060414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.060742 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.560734749 +0000 UTC m=+166.974128007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.070137 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhjvb"] Nov 22 07:05:24 crc kubenswrapper[4856]: W1122 07:05:24.076377 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf4bc58_e602_45b5_9b0c_3be4cb956dbf.slice/crio-fb7db6aa527b7fc41ef739a2ce7337a572b90b7ab88df4ddb62c4331a4fa0719 WatchSource:0}: Error finding container fb7db6aa527b7fc41ef739a2ce7337a572b90b7ab88df4ddb62c4331a4fa0719: Status 404 returned error can't find the container with id fb7db6aa527b7fc41ef739a2ce7337a572b90b7ab88df4ddb62c4331a4fa0719 Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.160980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.161490 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.661446344 +0000 UTC m=+167.074839592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.263258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.263859 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.763832779 +0000 UTC m=+167.177226037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.364728 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.365035 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.864985944 +0000 UTC m=+167.278379222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.377892 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-ntxhf" Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.401053 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:24 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:24 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:24 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.401152 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.466184 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.466749 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:24.966722694 +0000 UTC m=+167.380115942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.567442 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.067390058 +0000 UTC m=+167.480783356 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.567273 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.568061 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.568463 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.068447404 +0000 UTC m=+167.481840702 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.670448 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.670885 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.170865059 +0000 UTC m=+167.584258327 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.772766 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.773367 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.273339466 +0000 UTC m=+167.686732764 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.849622 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhjvb" event={"ID":"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf","Type":"ContainerStarted","Data":"fb7db6aa527b7fc41ef739a2ce7337a572b90b7ab88df4ddb62c4331a4fa0719"} Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.876359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.876529 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.37648818 +0000 UTC m=+167.789881438 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.876829 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.877353 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.37734403 +0000 UTC m=+167.790737288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.978553 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.978859 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.478812013 +0000 UTC m=+167.892205281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:24 crc kubenswrapper[4856]: I1122 07:05:24.979022 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:24 crc kubenswrapper[4856]: E1122 07:05:24.980034 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.480008491 +0000 UTC m=+167.893401749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.080766 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.081333 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.581316281 +0000 UTC m=+167.994709539 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.126914 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.182089 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kube-api-access\") pod \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.182293 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kubelet-dir\") pod \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\" (UID: \"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8\") " Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.182456 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.182790 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.682777383 +0000 UTC m=+168.096170641 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.183800 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f8717add-c237-46d4-8ea3-dc4c6b8cbeb8" (UID: "f8717add-c237-46d4-8ea3-dc4c6b8cbeb8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.192670 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f8717add-c237-46d4-8ea3-dc4c6b8cbeb8" (UID: "f8717add-c237-46d4-8ea3-dc4c6b8cbeb8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.283350 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.283594 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.283617 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f8717add-c237-46d4-8ea3-dc4c6b8cbeb8-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.283679 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.783665372 +0000 UTC m=+168.197058630 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.385389 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.385714 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.885702699 +0000 UTC m=+168.299095957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.398795 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:25 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:25 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:25 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.398846 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.486961 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.487643 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:25.987617743 +0000 UTC m=+168.401011041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.589816 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.590410 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.090390887 +0000 UTC m=+168.503784145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.629652 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.691809 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.692763 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.192727302 +0000 UTC m=+168.606120580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.793802 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.794245 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.294225395 +0000 UTC m=+168.707618653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.802468 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mvcdt" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.857617 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f8717add-c237-46d4-8ea3-dc4c6b8cbeb8","Type":"ContainerDied","Data":"49ea8664d17fd7bdf613bc37eecd5874c987bdb9b98d7e317aa7e18cd96ebe59"} Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.858143 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ea8664d17fd7bdf613bc37eecd5874c987bdb9b98d7e317aa7e18cd96ebe59" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.857679 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.865205 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerID="bbc242aebbbedf6dc56a8fb5b277fc15a0223daa1dadd481b2f699cc9c4d17f6" exitCode=0 Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.865279 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxlt7" event={"ID":"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598","Type":"ContainerDied","Data":"bbc242aebbbedf6dc56a8fb5b277fc15a0223daa1dadd481b2f699cc9c4d17f6"} Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.866641 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"baa1d67b-074f-437d-9b55-2d0522bb1db8","Type":"ContainerStarted","Data":"c5a077d19b49149de8baa22b91d9ab8169ad88d977740f98135f4d503c4a68ad"} Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.867630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerStarted","Data":"afffcbf3a5e5228e89a11cf71a449cb0f2646e4850c10996c3df3ca2885162f3"} Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.895468 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.895765 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.395705848 +0000 UTC m=+168.809099156 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.896166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.897190 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.397167314 +0000 UTC m=+168.810560602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:25 crc kubenswrapper[4856]: I1122 07:05:25.998080 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:25 crc kubenswrapper[4856]: E1122 07:05:25.998546 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.498488403 +0000 UTC m=+168.911881651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.100404 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.100951 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.600926779 +0000 UTC m=+169.014320037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.201863 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.202118 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.702081754 +0000 UTC m=+169.115475012 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.202274 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.202715 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.70270103 +0000 UTC m=+169.116094478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.304264 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.304541 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.804464289 +0000 UTC m=+169.217857587 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.304634 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.305413 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.805392772 +0000 UTC m=+169.218786210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.364867 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.365129 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.365425 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.365570 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.400783 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:26 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:26 crc kubenswrapper[4856]: 
[+]process-running ok Nov 22 07:05:26 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.401482 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.406773 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.407011 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.906974168 +0000 UTC m=+169.320367436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.407268 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.407810 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:26.907798047 +0000 UTC m=+169.321191315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.509225 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.509413 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.009372833 +0000 UTC m=+169.422766301 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.510352 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.510902 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.010888969 +0000 UTC m=+169.424282227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.566282 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-rpfn9" Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.592552 4856 patch_prober.go:28] interesting pod/console-f9d7485db-57k7r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.592661 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-57k7r" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.611434 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.611744 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.111699127 +0000 UTC m=+169.525092525 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.611856 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.612326 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.112316651 +0000 UTC m=+169.525709909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.713200 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.713467 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.213445217 +0000 UTC m=+169.626838475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.713584 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.714124 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.214099262 +0000 UTC m=+169.627492520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.814940 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.815201 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.315155555 +0000 UTC m=+169.728548813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.815713 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.816447 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.316428335 +0000 UTC m=+169.729821593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.874597 4856 generic.go:334] "Generic (PLEG): container finished" podID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerID="afffcbf3a5e5228e89a11cf71a449cb0f2646e4850c10996c3df3ca2885162f3" exitCode=0 Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.874736 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerDied","Data":"afffcbf3a5e5228e89a11cf71a449cb0f2646e4850c10996c3df3ca2885162f3"} Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.876323 4856 generic.go:334] "Generic (PLEG): container finished" podID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerID="6acd901f83779136f4c44d2e31a782244adcd2490adc4491da55e97914fa42c9" exitCode=0 Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.876419 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzwhc" event={"ID":"657ddf29-027f-425f-92bb-27a76a9c19c6","Type":"ContainerDied","Data":"6acd901f83779136f4c44d2e31a782244adcd2490adc4491da55e97914fa42c9"} Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.878002 4856 generic.go:334] "Generic (PLEG): container finished" podID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerID="d6d0e3806b6c53ae332780f09d95e1547fbbd715b9f2ea455f487da1cfdfe6c4" exitCode=0 Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.878048 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhjvb" event={"ID":"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf","Type":"ContainerDied","Data":"d6d0e3806b6c53ae332780f09d95e1547fbbd715b9f2ea455f487da1cfdfe6c4"} Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.900673 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.900646305 podStartE2EDuration="5.900646305s" podCreationTimestamp="2025-11-22 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:26.89753503 +0000 UTC m=+169.310928288" watchObservedRunningTime="2025-11-22 07:05:26.900646305 +0000 UTC m=+169.314039563" Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.917152 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.917407 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.417350965 +0000 UTC m=+169.830744223 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:26 crc kubenswrapper[4856]: I1122 07:05:26.917600 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:26 crc kubenswrapper[4856]: E1122 07:05:26.918070 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.418060703 +0000 UTC m=+169.831453961 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.019082 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.019346 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.51931568 +0000 UTC m=+169.932708938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.019429 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.019858 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.519845123 +0000 UTC m=+169.933238511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.059798 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.121664 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.121900 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.621853679 +0000 UTC m=+170.035246947 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.122184 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.122660 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.622641488 +0000 UTC m=+170.036034756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.223403 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.223667 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.723626729 +0000 UTC m=+170.137019987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.224362 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.224979 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.724951921 +0000 UTC m=+170.138345179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.325770 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.326130 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.826113857 +0000 UTC m=+170.239507115 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.341553 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.389545 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5xhm6" Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.399733 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:27 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:27 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:27 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.399777 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.428423 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.429215 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:27.929192859 +0000 UTC m=+170.342586117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.530187 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.530439 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.030406765 +0000 UTC m=+170.443800023 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.530553 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.530884 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.030877236 +0000 UTC m=+170.444270494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.630851 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.631425 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.631879 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.131856887 +0000 UTC m=+170.545250145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.631983 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.632334 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.132319879 +0000 UTC m=+170.545713137 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.636867 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.732986 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.733114 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.233094416 +0000 UTC m=+170.646487674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.733374 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.735296 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.235281968 +0000 UTC m=+170.648675426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.834171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.834711 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.334688511 +0000 UTC m=+170.748081769 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:27 crc kubenswrapper[4856]: I1122 07:05:27.936202 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:27 crc kubenswrapper[4856]: E1122 07:05:27.937115 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.437090746 +0000 UTC m=+170.850483994 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.038938 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.039217 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.539161264 +0000 UTC m=+170.952554522 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.039265 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.039801 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.539780339 +0000 UTC m=+170.953173597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.141417 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.141680 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.641648632 +0000 UTC m=+171.055041890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.141859 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.142479 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.642456151 +0000 UTC m=+171.055849439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.192724 4856 patch_prober.go:28] interesting pod/apiserver-76f77b778f-lpbp9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]log ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]etcd ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/generic-apiserver-start-informers ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/max-in-flight-filter ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 22 07:05:28 crc kubenswrapper[4856]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 22 07:05:28 crc kubenswrapper[4856]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectcache ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 22 07:05:28 crc kubenswrapper[4856]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 22 07:05:28 crc kubenswrapper[4856]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 22 07:05:28 crc kubenswrapper[4856]: livez check failed Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.192795 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" podUID="d9a5e5b4-a255-4888-b381-e743b2440738" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.243890 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.244173 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.744129359 +0000 UTC m=+171.157522627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.244882 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.245124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.245659 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.745642396 +0000 UTC m=+171.159035674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.253150 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/dda6b6e5-61a2-459c-9207-5e5aa500869f-metrics-certs\") pod \"network-metrics-daemon-722tb\" (UID: \"dda6b6e5-61a2-459c-9207-5e5aa500869f\") " pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.346190 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.346551 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.846481804 +0000 UTC m=+171.259875062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.346675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.347125 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.847109098 +0000 UTC m=+171.260502356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.399596 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:28 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:28 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:28 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.399658 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.448482 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.448755 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.948716465 +0000 UTC m=+171.362109723 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.448998 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.449481 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:28.949473752 +0000 UTC m=+171.362867010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.544066 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-722tb" Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.549788 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.550139 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.050087745 +0000 UTC m=+171.463481003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.550232 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.550570 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.050556087 +0000 UTC m=+171.463949335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.651971 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.652207 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.152180583 +0000 UTC m=+171.565573841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.652675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.653156 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.153138626 +0000 UTC m=+171.566531884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.754603 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.254566328 +0000 UTC m=+171.667959596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.754727 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.756085 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.256071374 +0000 UTC m=+171.669464632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.755320 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.759799 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-722tb"] Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.857758 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.857999 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.357892866 +0000 UTC m=+171.771286124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.858143 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.858765 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.358741937 +0000 UTC m=+171.772135195 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.894399 4856 generic.go:334] "Generic (PLEG): container finished" podID="baa1d67b-074f-437d-9b55-2d0522bb1db8" containerID="c5a077d19b49149de8baa22b91d9ab8169ad88d977740f98135f4d503c4a68ad" exitCode=0 Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.894493 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"baa1d67b-074f-437d-9b55-2d0522bb1db8","Type":"ContainerDied","Data":"c5a077d19b49149de8baa22b91d9ab8169ad88d977740f98135f4d503c4a68ad"} Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.896564 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-722tb" event={"ID":"dda6b6e5-61a2-459c-9207-5e5aa500869f","Type":"ContainerStarted","Data":"f657a427113c880eb870e0a0d471b314fda0a785b850bd35af01c67741e83cb3"} Nov 22 07:05:28 crc kubenswrapper[4856]: I1122 07:05:28.959832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:28 crc kubenswrapper[4856]: E1122 07:05:28.961286 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.461265505 +0000 UTC m=+171.874658763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.061416 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.062064 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.562038271 +0000 UTC m=+171.975431529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.163246 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.163533 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.663467113 +0000 UTC m=+172.076860381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.163652 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.164100 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.664089778 +0000 UTC m=+172.077483046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.265266 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.265910 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.765835247 +0000 UTC m=+172.179228515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.367635 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.368115 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.86809487 +0000 UTC m=+172.281488128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.401590 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:29 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:29 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:29 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.401805 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.469123 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.469371 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.969332997 +0000 UTC m=+172.382726255 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.470091 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.470550 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:29.970541866 +0000 UTC m=+172.383935124 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.571745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.572065 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.072016339 +0000 UTC m=+172.485409607 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.572222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.572737 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.072716006 +0000 UTC m=+172.486109284 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.674322 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.674583 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.174510207 +0000 UTC m=+172.587903475 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.675779 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.676332 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.17630625 +0000 UTC m=+172.589699538 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.754263 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.754350 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.777975 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.778333 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.278282965 +0000 UTC m=+172.691676283 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.778689 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.779296 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.279267119 +0000 UTC m=+172.692660427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.879961 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.880160 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.380123017 +0000 UTC m=+172.793516285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.880442 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.880936 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.380913906 +0000 UTC m=+172.794307174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:29 crc kubenswrapper[4856]: I1122 07:05:29.981744 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:29 crc kubenswrapper[4856]: E1122 07:05:29.982172 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.482143543 +0000 UTC m=+172.895536811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.085115 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.085609 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.585584424 +0000 UTC m=+172.998977682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.186814 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.187246 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.687227671 +0000 UTC m=+173.100620929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.198277 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.289068 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa1d67b-074f-437d-9b55-2d0522bb1db8-kubelet-dir\") pod \"baa1d67b-074f-437d-9b55-2d0522bb1db8\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.289158 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa1d67b-074f-437d-9b55-2d0522bb1db8-kube-api-access\") pod \"baa1d67b-074f-437d-9b55-2d0522bb1db8\" (UID: \"baa1d67b-074f-437d-9b55-2d0522bb1db8\") " Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.289441 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/baa1d67b-074f-437d-9b55-2d0522bb1db8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "baa1d67b-074f-437d-9b55-2d0522bb1db8" (UID: "baa1d67b-074f-437d-9b55-2d0522bb1db8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.289973 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.290036 4856 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baa1d67b-074f-437d-9b55-2d0522bb1db8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.290365 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.790351993 +0000 UTC m=+173.203745251 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.296130 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baa1d67b-074f-437d-9b55-2d0522bb1db8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "baa1d67b-074f-437d-9b55-2d0522bb1db8" (UID: "baa1d67b-074f-437d-9b55-2d0522bb1db8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.391850 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.392031 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.892004601 +0000 UTC m=+173.305397849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.392267 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.392354 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/baa1d67b-074f-437d-9b55-2d0522bb1db8-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.392633 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.892625926 +0000 UTC m=+173.306019184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.399915 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:30 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:30 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:30 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.400003 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.493625 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.493816 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-22 07:05:30.993780161 +0000 UTC m=+173.407173459 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.493906 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.494369 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:30.994350575 +0000 UTC m=+173.407743873 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.595153 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.595366 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.095338267 +0000 UTC m=+173.508731525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.595521 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.595786 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.095779187 +0000 UTC m=+173.509172445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.696484 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.696818 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.196776259 +0000 UTC m=+173.610169547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.798437 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.799023 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.29899978 +0000 UTC m=+173.712393038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.901504 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.901660 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.401632111 +0000 UTC m=+173.815025369 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.901888 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:30 crc kubenswrapper[4856]: E1122 07:05:30.902437 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.40242363 +0000 UTC m=+173.815816888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.916201 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"baa1d67b-074f-437d-9b55-2d0522bb1db8","Type":"ContainerDied","Data":"ac4fb497dc02ae2f887c3f92c7427d966ef8d9ac57d91865bc9a111a7caec887"} Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.916238 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4fb497dc02ae2f887c3f92c7427d966ef8d9ac57d91865bc9a111a7caec887" Nov 22 07:05:30 crc kubenswrapper[4856]: I1122 07:05:30.916326 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.003444 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.003825 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.50376533 +0000 UTC m=+173.917158588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.004047 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.004372 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.504362494 +0000 UTC m=+173.917755752 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.105220 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.105466 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.605433458 +0000 UTC m=+174.018826716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.105783 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.106066 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.606058793 +0000 UTC m=+174.019452051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.207265 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.207488 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.707437824 +0000 UTC m=+174.120831082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.207679 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.208092 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.708073739 +0000 UTC m=+174.121466997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.309239 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.309425 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.809395849 +0000 UTC m=+174.222789117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.309516 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.309878 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.80987018 +0000 UTC m=+174.223263438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.400495 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:31 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:31 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:31 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.400612 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.411006 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.411287 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.911243381 +0000 UTC m=+174.324636639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.411365 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.411876 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:31.911857195 +0000 UTC m=+174.325250453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.426052 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.431575 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-lpbp9" Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.513440 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.515263 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.015236784 +0000 UTC m=+174.428630042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.615830 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.616124 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.116112833 +0000 UTC m=+174.529506091 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.717438 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.717933 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.217909514 +0000 UTC m=+174.631302772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.820065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.821885 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.321860106 +0000 UTC m=+174.735253364 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.921207 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.921435 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.421402913 +0000 UTC m=+174.834796171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:31 crc kubenswrapper[4856]: I1122 07:05:31.921501 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:31 crc kubenswrapper[4856]: E1122 07:05:31.921910 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.421894405 +0000 UTC m=+174.835287653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.023338 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.023555 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.523529892 +0000 UTC m=+174.936923160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.023747 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.025175 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.525152501 +0000 UTC m=+174.938545759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.124660 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.124954 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.624913183 +0000 UTC m=+175.038306441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.125013 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.125573 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.625559879 +0000 UTC m=+175.038953137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.226387 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.227032 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.72698961 +0000 UTC m=+175.140382868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.328321 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.328823 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.828619818 +0000 UTC m=+175.242013076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.399494 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:32 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:32 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:32 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.399587 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.429838 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.430182 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:32.930164472 +0000 UTC m=+175.343557730 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.531374 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.531747 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.031733158 +0000 UTC m=+175.445126416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.632023 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.632134 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.132118145 +0000 UTC m=+175.545511403 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.632455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.632792 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.132775461 +0000 UTC m=+175.546168719 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.733715 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.734078 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.23406307 +0000 UTC m=+175.647456318 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.835760 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.836131 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.336118756 +0000 UTC m=+175.749512014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.931466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-722tb" event={"ID":"dda6b6e5-61a2-459c-9207-5e5aa500869f","Type":"ContainerStarted","Data":"b1d90cecff6d18a935727f6fd0d9110e4c8145959d0a92dee60c440af7f287d5"} Nov 22 07:05:32 crc kubenswrapper[4856]: I1122 07:05:32.937340 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:32 crc kubenswrapper[4856]: E1122 07:05:32.937779 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.437758114 +0000 UTC m=+175.851151372 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.039266 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.039877 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.539826461 +0000 UTC m=+175.953219729 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.141189 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.141601 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.641582121 +0000 UTC m=+176.054975379 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.243110 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.243488 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.743468274 +0000 UTC m=+176.156861532 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.343933 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.344096 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.844071886 +0000 UTC m=+176.257465144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.344189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.344732 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.844723882 +0000 UTC m=+176.258117140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.399683 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:33 crc kubenswrapper[4856]: [-]has-synced failed: reason withheld Nov 22 07:05:33 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:33 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.399745 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.445722 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.445941 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.945910948 +0000 UTC m=+176.359304206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.446041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.446418 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:33.94640244 +0000 UTC m=+176.359795708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.547245 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.547343 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.04732759 +0000 UTC m=+176.460720848 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.547742 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.548051 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.048042937 +0000 UTC m=+176.461436185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.649559 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.649813 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.149765037 +0000 UTC m=+176.563158295 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.650069 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.650696 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.150685098 +0000 UTC m=+176.564078356 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.756494 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.756695 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.25666549 +0000 UTC m=+176.670058748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.756775 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.757135 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.2571068 +0000 UTC m=+176.670500058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.858374 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.358355348 +0000 UTC m=+176.771748606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.858397 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.859770 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.860220 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.360210102 +0000 UTC m=+176.773603360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.961641 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.961923 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.46187564 +0000 UTC m=+176.875268908 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:33 crc kubenswrapper[4856]: I1122 07:05:33.962014 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:33 crc kubenswrapper[4856]: E1122 07:05:33.962472 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.462452064 +0000 UTC m=+176.875845322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.063791 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.063988 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.563959697 +0000 UTC m=+176.977352955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.064058 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.064411 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.564391569 +0000 UTC m=+176.977784827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.165536 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.165739 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.665710178 +0000 UTC m=+177.079103436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.165843 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.166322 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.666305101 +0000 UTC m=+177.079698359 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.266864 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.267086 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.767056077 +0000 UTC m=+177.180449335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.267322 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.267828 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.767807836 +0000 UTC m=+177.181201094 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.368767 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.369017 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.868983042 +0000 UTC m=+177.282376310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.369273 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.369743 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.86972095 +0000 UTC m=+177.283114208 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.399187 4856 patch_prober.go:28] interesting pod/router-default-5444994796-2grnj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 22 07:05:34 crc kubenswrapper[4856]: [+]has-synced ok Nov 22 07:05:34 crc kubenswrapper[4856]: [+]process-running ok Nov 22 07:05:34 crc kubenswrapper[4856]: healthz check failed Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.399327 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-2grnj" podUID="c7c1d403-b7f4-4d42-b707-54ac23853d3f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.469929 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.470087 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.970061645 +0000 UTC m=+177.383454903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.470126 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.470465 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:34.970455065 +0000 UTC m=+177.383848313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.572009 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.572236 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.072210044 +0000 UTC m=+177.485603302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.572415 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.572703 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.072696267 +0000 UTC m=+177.486089515 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.673651 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.673886 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.173859252 +0000 UTC m=+177.587252510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.673960 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.674412 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.174393225 +0000 UTC m=+177.587786483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.775865 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.776112 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.276066323 +0000 UTC m=+177.689459581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.776326 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.778747 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.276978135 +0000 UTC m=+177.690371393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.878809 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.879034 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.379003341 +0000 UTC m=+177.792396599 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.879596 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.879968 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.379960454 +0000 UTC m=+177.793353712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.980728 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.981030 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.480963266 +0000 UTC m=+177.894356534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:34 crc kubenswrapper[4856]: I1122 07:05:34.981289 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:34 crc kubenswrapper[4856]: E1122 07:05:34.981488 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.481474809 +0000 UTC m=+177.894868067 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.083358 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.084141 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.584122219 +0000 UTC m=+177.997515477 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.186618 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.186985 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.686930674 +0000 UTC m=+178.100323932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.287826 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.288118 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.78806205 +0000 UTC m=+178.201455308 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.288524 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.288952 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.78894236 +0000 UTC m=+178.202335618 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.389835 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.390061 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.890031374 +0000 UTC m=+178.303424632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.390335 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.390767 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.890750462 +0000 UTC m=+178.304143710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.399315 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.402656 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-2grnj" Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.491886 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.493103 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:35.993086756 +0000 UTC m=+178.406480014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.593601 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.594039 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.094018176 +0000 UTC m=+178.507411434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.695052 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.695258 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.195225952 +0000 UTC m=+178.608619210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.695329 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.695674 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.195661063 +0000 UTC m=+178.609054321 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.796133 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.796307 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.296290646 +0000 UTC m=+178.709683904 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.796676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.797005 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.296997013 +0000 UTC m=+178.710390271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.897283 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.897472 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.397436431 +0000 UTC m=+178.810829699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.898935 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:35 crc kubenswrapper[4856]: E1122 07:05:35.899252 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.399239554 +0000 UTC m=+178.812632812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:35 crc kubenswrapper[4856]: I1122 07:05:35.969973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-722tb" event={"ID":"dda6b6e5-61a2-459c-9207-5e5aa500869f","Type":"ContainerStarted","Data":"8d6ab79aae7fedb65154fdf4d22d7658546e533b64c38ac5894dd394d672b866"} Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.000589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.000872 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.500838791 +0000 UTC m=+178.914232049 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.000941 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.001358 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.501345163 +0000 UTC m=+178.914738421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.101930 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.102871 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.602825857 +0000 UTC m=+179.016219105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.203669 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.204127 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.704111715 +0000 UTC m=+179.117504983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.313067 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.313368 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.813256892 +0000 UTC m=+179.226650150 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.313540 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.313929 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.813917987 +0000 UTC m=+179.227311245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365153 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365167 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365220 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365267 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365223 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365851 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" 
containerStatusID={"Type":"cri-o","ID":"e23ade9a36de2945fdd952c812a20c1af881dd1ee5e0dc8dd858debd512f1f58"} pod="openshift-console/downloads-7954f5f757-psfmg" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365916 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365944 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.365944 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" containerID="cri-o://e23ade9a36de2945fdd952c812a20c1af881dd1ee5e0dc8dd858debd512f1f58" gracePeriod=2 Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.415271 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.415685 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:36.915663357 +0000 UTC m=+179.329056615 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.516484 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.516963 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.016945586 +0000 UTC m=+179.430338844 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.592298 4856 patch_prober.go:28] interesting pod/console-f9d7485db-57k7r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.592361 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-57k7r" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.617495 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.617678 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.11764825 +0000 UTC m=+179.531041518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.617869 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.618219 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.118203714 +0000 UTC m=+179.531596972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.719390 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.719615 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.219585525 +0000 UTC m=+179.632978783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.719762 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.720165 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.220154338 +0000 UTC m=+179.633547596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.821014 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.821290 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.321261313 +0000 UTC m=+179.734654571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.821347 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.821685 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.321672273 +0000 UTC m=+179.735065531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.923470 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.923745 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.423694639 +0000 UTC m=+179.837087897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.923845 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:36 crc kubenswrapper[4856]: E1122 07:05:36.924280 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.424263692 +0000 UTC m=+179.837656950 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.979049 4856 generic.go:334] "Generic (PLEG): container finished" podID="52414feb-0c08-4591-a84a-985167853ba3" containerID="e23ade9a36de2945fdd952c812a20c1af881dd1ee5e0dc8dd858debd512f1f58" exitCode=0 Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.979118 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psfmg" event={"ID":"52414feb-0c08-4591-a84a-985167853ba3","Type":"ContainerDied","Data":"e23ade9a36de2945fdd952c812a20c1af881dd1ee5e0dc8dd858debd512f1f58"} Nov 22 07:05:36 crc kubenswrapper[4856]: I1122 07:05:36.984150 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" event={"ID":"53ec176a-6d8d-43a2-8523-78fd3cd12cd9","Type":"ContainerStarted","Data":"337432eac0a0d676daab6e233dda0e80ca32a5701faf8abceed4851a0129f64f"} Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.028084 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.028656 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.528635225 +0000 UTC m=+179.942028483 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.129997 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.130432 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.630417395 +0000 UTC m=+180.043810653 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.231654 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.231851 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.731825717 +0000 UTC m=+180.145218975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.231962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.232255 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.732241607 +0000 UTC m=+180.145634865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.333637 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.333933 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.833911565 +0000 UTC m=+180.247304823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.402537 4856 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.435432 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.435787 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:37.935772748 +0000 UTC m=+180.349166006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.536880 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.537192 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.037176509 +0000 UTC m=+180.450569767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.637940 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.638330 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.138318204 +0000 UTC m=+180.551711462 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.739756 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.739892 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.23986989 +0000 UTC m=+180.653263148 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.740069 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.740405 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.240397482 +0000 UTC m=+180.653790730 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.841628 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.841821 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.341793464 +0000 UTC m=+180.755186722 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.841942 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.843066 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.343055973 +0000 UTC m=+180.756449231 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.942679 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.942873 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.442846227 +0000 UTC m=+180.856239485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:37 crc kubenswrapper[4856]: I1122 07:05:37.942916 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:37 crc kubenswrapper[4856]: E1122 07:05:37.943266 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-22 07:05:38.443258976 +0000 UTC m=+180.856652234 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sl25x" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.002915 4856 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-22T07:05:37.402563812Z","Handler":null,"Name":""} Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.005552 4856 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.005587 4856 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.044743 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.053770 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.146044 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.152196 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
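[Editor's note, not part of the captured log] The records immediately above show the turning point for the repeated "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" failures: the kubelet plugin watcher picks up the registration socket (/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock), csi_plugin validates and registers the driver at its endpoint (/var/lib/kubelet/plugins/csi-hostpath/csi.sock), and the pending UnmountVolume/MountVolume operations then succeed on their next 500ms retry. As a minimal, illustrative sketch only (not part of kubelet; the import paths, the insecure local credentials, and the direct use of the endpoint from the log are all assumptions), the same Identity endpoint can be queried for the driver name the kubelet matches against:

    // probe_csi_identity.go -- illustrative sketch, NOT kubelet code.
    // Dials the CSI endpoint named in the log above and asks the Identity
    // service which driver name it registers under; that name must equal
    // the one the volume plugin looks up ("kubevirt.io.hostpath-provisioner").
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	csi "github.com/container-storage-interface/spec/lib/go/csi"
    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    )

    func main() {
    	// Endpoint taken from the log record:
    	//   "Register new plugin with name: kubevirt.io.hostpath-provisioner
    	//    at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock"
    	const endpoint = "unix:///var/lib/kubelet/plugins/csi-hostpath/csi.sock"

    	conn, err := grpc.Dial(endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatalf("dial %s: %v", endpoint, err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// GetPluginInfo is the Identity RPC the kubelet relies on when it
    	// validates a newly registered CSI driver.
    	resp, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
    	if err != nil {
    		log.Fatalf("GetPluginInfo: %v", err)
    	}
    	fmt.Printf("CSI driver name: %s, vendor version: %s\n", resp.GetName(), resp.GetVendorVersion())
    }

Once the registered name matches, the failed operations above stop recurring; note also the "STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice" record, which means MountVolume.MountDevice is effectively a no-op for this driver and only SetUp (the per-pod bind mount) does real work, as the following records confirm.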
Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.152227 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.182384 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sl25x\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.193892 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:05:38 crc kubenswrapper[4856]: I1122 07:05:38.739121 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 22 07:05:46 crc kubenswrapper[4856]: I1122 07:05:46.366658 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:46 crc kubenswrapper[4856]: I1122 07:05:46.367545 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:46 crc kubenswrapper[4856]: I1122 07:05:46.596393 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:46 crc kubenswrapper[4856]: I1122 07:05:46.600617 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:05:46 crc kubenswrapper[4856]: I1122 07:05:46.621694 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-722tb" podStartSLOduration=161.621675679 podStartE2EDuration="2m41.621675679s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:05:37.001962306 +0000 UTC m=+179.415355564" watchObservedRunningTime="2025-11-22 07:05:46.621675679 +0000 UTC m=+189.035068937" Nov 22 07:05:47 crc kubenswrapper[4856]: I1122 07:05:47.433389 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-gckss" Nov 22 07:05:55 crc kubenswrapper[4856]: I1122 07:05:55.684225 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 22 07:05:56 crc kubenswrapper[4856]: I1122 07:05:56.365145 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:05:56 crc kubenswrapper[4856]: I1122 07:05:56.365492 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:05:59 crc kubenswrapper[4856]: I1122 07:05:59.754217 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:05:59 crc kubenswrapper[4856]: I1122 07:05:59.754615 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:06:06 crc kubenswrapper[4856]: I1122 07:06:06.364716 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:06 crc kubenswrapper[4856]: I1122 07:06:06.365100 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:13 crc kubenswrapper[4856]: E1122 07:06:13.261787 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 07:06:13 crc kubenswrapper[4856]: E1122 07:06:13.262583 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8pqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-dxlt7_openshift-marketplace(e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:06:13 crc kubenswrapper[4856]: E1122 07:06:13.263891 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-dxlt7" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" Nov 22 07:06:13 crc kubenswrapper[4856]: E1122 07:06:13.269474 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 22 07:06:13 crc kubenswrapper[4856]: E1122 07:06:13.269795 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sq78l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qzwhc_openshift-marketplace(657ddf29-027f-425f-92bb-27a76a9c19c6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:06:13 crc kubenswrapper[4856]: E1122 07:06:13.271569 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qzwhc" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" Nov 22 07:06:16 crc kubenswrapper[4856]: I1122 07:06:16.364477 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:16 crc kubenswrapper[4856]: I1122 07:06:16.365375 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:26 crc kubenswrapper[4856]: I1122 07:06:26.364920 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:26 crc kubenswrapper[4856]: I1122 07:06:26.365711 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:28 crc kubenswrapper[4856]: E1122 07:06:28.125300 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: reading blob sha256:01b5e3d8fc7ea5ac8437d32aff937c6044be65d55a140d575bae717c506609b3: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:01b5e3d8fc7ea5ac8437d32aff937c6044be65d55a140d575bae717c506609b3\": context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 07:06:28 crc kubenswrapper[4856]: E1122 07:06:28.125787 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqgpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m79j6_openshift-marketplace(56f76d43-404b-4b05-97d9-39a17e5774ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:01b5e3d8fc7ea5ac8437d32aff937c6044be65d55a140d575bae717c506609b3: Get \"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:01b5e3d8fc7ea5ac8437d32aff937c6044be65d55a140d575bae717c506609b3\": context canceled" logger="UnhandledError" Nov 22 07:06:28 crc kubenswrapper[4856]: E1122 07:06:28.127071 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:01b5e3d8fc7ea5ac8437d32aff937c6044be65d55a140d575bae717c506609b3: Get \\\"https://registry.redhat.io/v2/redhat/community-operator-index/blobs/sha256:01b5e3d8fc7ea5ac8437d32aff937c6044be65d55a140d575bae717c506609b3\\\": context canceled\"" pod="openshift-marketplace/community-operators-m79j6" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" Nov 22 07:06:29 crc kubenswrapper[4856]: I1122 07:06:29.755268 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:06:29 crc kubenswrapper[4856]: I1122 07:06:29.756759 4856 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:06:29 crc kubenswrapper[4856]: I1122 07:06:29.756865 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:06:29 crc kubenswrapper[4856]: I1122 07:06:29.757654 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:06:29 crc kubenswrapper[4856]: I1122 07:06:29.757733 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b" gracePeriod=600 Nov 22 07:06:30 crc kubenswrapper[4856]: E1122 07:06:30.360188 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 07:06:30 crc kubenswrapper[4856]: E1122 07:06:30.360361 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bj98g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mhjvb_openshift-marketplace(bcf4bc58-e602-45b5-9b0c-3be4cb956dbf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 
07:06:30 crc kubenswrapper[4856]: E1122 07:06:30.361713 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mhjvb" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" Nov 22 07:06:34 crc kubenswrapper[4856]: I1122 07:06:34.313801 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b" exitCode=0 Nov 22 07:06:34 crc kubenswrapper[4856]: I1122 07:06:34.313893 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b"} Nov 22 07:06:36 crc kubenswrapper[4856]: I1122 07:06:36.364682 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:36 crc kubenswrapper[4856]: I1122 07:06:36.364772 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:46 crc kubenswrapper[4856]: I1122 07:06:46.365289 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:46 crc kubenswrapper[4856]: I1122 07:06:46.366734 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:47 crc kubenswrapper[4856]: E1122 07:06:47.061316 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 22 07:06:47 crc kubenswrapper[4856]: E1122 07:06:47.061554 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxbrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sptws_openshift-marketplace(bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:06:47 crc kubenswrapper[4856]: E1122 07:06:47.063045 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sptws" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" Nov 22 07:06:47 crc kubenswrapper[4856]: E1122 07:06:47.531649 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 22 07:06:47 crc kubenswrapper[4856]: E1122 07:06:47.532330 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6swr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-75ztc_openshift-marketplace(02bb740d-242c-4846-8bbf-5fe3e4f1b97a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:06:47 crc kubenswrapper[4856]: E1122 07:06:47.533841 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-75ztc" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" Nov 22 07:06:49 crc kubenswrapper[4856]: E1122 07:06:49.130414 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sptws" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" Nov 22 07:06:49 crc kubenswrapper[4856]: E1122 07:06:49.131779 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-75ztc" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" Nov 22 07:06:49 crc kubenswrapper[4856]: E1122 07:06:49.197817 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 07:06:49 crc kubenswrapper[4856]: E1122 07:06:49.198434 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8sjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gvf9n_openshift-marketplace(52860224-c188-4eda-830e-9101706f4ce2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:06:49 crc kubenswrapper[4856]: E1122 07:06:49.199896 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gvf9n" podUID="52860224-c188-4eda-830e-9101706f4ce2" Nov 22 07:06:49 crc kubenswrapper[4856]: I1122 07:06:49.352349 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sl25x"] Nov 22 07:06:49 crc kubenswrapper[4856]: W1122 07:06:49.378852 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7faca66b_795d_46b2_aebd_53f45fdb51de.slice/crio-8793c558353f849fe37475b70d9e032a026eea3481c69141adc0619745beffdd WatchSource:0}: Error finding container 8793c558353f849fe37475b70d9e032a026eea3481c69141adc0619745beffdd: Status 404 returned error can't find the container with id 8793c558353f849fe37475b70d9e032a026eea3481c69141adc0619745beffdd Nov 22 07:06:49 crc kubenswrapper[4856]: I1122 07:06:49.396884 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" event={"ID":"7faca66b-795d-46b2-aebd-53f45fdb51de","Type":"ContainerStarted","Data":"8793c558353f849fe37475b70d9e032a026eea3481c69141adc0619745beffdd"} Nov 22 07:06:49 crc kubenswrapper[4856]: I1122 07:06:49.398697 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psfmg" event={"ID":"52414feb-0c08-4591-a84a-985167853ba3","Type":"ContainerStarted","Data":"9fc59df7a6f0300e18ead1fd3103bee76cd54e6242bbdb3c251250b3461dd7d5"} Nov 22 07:06:49 crc kubenswrapper[4856]: E1122 07:06:49.402521 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gvf9n" podUID="52860224-c188-4eda-830e-9101706f4ce2" Nov 22 07:06:50 crc kubenswrapper[4856]: I1122 07:06:50.408306 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" event={"ID":"7faca66b-795d-46b2-aebd-53f45fdb51de","Type":"ContainerStarted","Data":"99214763ec63c9422df1762c62fa78bfb914b9ff912460aa8392b9ba16ed287c"} Nov 22 07:06:50 crc kubenswrapper[4856]: I1122 07:06:50.411152 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"446652a4e7a6c1452a08d9219d8000e01189f33aeb22bd2b2862fae72dd9e328"} Nov 22 07:06:50 crc kubenswrapper[4856]: I1122 07:06:50.413913 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" event={"ID":"53ec176a-6d8d-43a2-8523-78fd3cd12cd9","Type":"ContainerStarted","Data":"827344985cdc86c10cdd3d8eb0be6edb0072dca1fd93144058dd4274b008b085"} Nov 22 07:06:50 crc kubenswrapper[4856]: I1122 07:06:50.414152 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:06:50 crc kubenswrapper[4856]: I1122 07:06:50.414739 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:50 crc kubenswrapper[4856]: I1122 07:06:50.414821 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:51 crc kubenswrapper[4856]: I1122 07:06:51.419410 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:51 crc kubenswrapper[4856]: I1122 07:06:51.419720 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:51 crc kubenswrapper[4856]: I1122 07:06:51.442492 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" podStartSLOduration=226.442471628 podStartE2EDuration="3m46.442471628s" podCreationTimestamp="2025-11-22 07:03:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:06:51.437412038 +0000 UTC m=+253.850805316" watchObservedRunningTime="2025-11-22 07:06:51.442471628 +0000 UTC m=+253.855864886" Nov 22 07:06:52 crc kubenswrapper[4856]: E1122 07:06:52.716372 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 22 07:06:52 crc kubenswrapper[4856]: E1122 07:06:52.716811 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjfxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-xx8tx_openshift-marketplace(072e5312-2542-496b-bda2-58d411f4f1c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:06:52 crc kubenswrapper[4856]: E1122 07:06:52.718083 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-xx8tx" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" Nov 22 07:06:54 crc kubenswrapper[4856]: E1122 07:06:54.457898 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-xx8tx" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" Nov 22 07:06:56 crc kubenswrapper[4856]: I1122 07:06:56.365187 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:56 crc kubenswrapper[4856]: I1122 07:06:56.365738 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:56 crc 
kubenswrapper[4856]: I1122 07:06:56.365227 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:06:56 crc kubenswrapper[4856]: I1122 07:06:56.365924 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.195200 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.463361 4856 generic.go:334] "Generic (PLEG): container finished" podID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerID="80f26ec0a7633dfb22a582680a090b7374dd430b2e2567900b3e659bb3aa5fda" exitCode=0 Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.463486 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m79j6" event={"ID":"56f76d43-404b-4b05-97d9-39a17e5774ed","Type":"ContainerDied","Data":"80f26ec0a7633dfb22a582680a090b7374dd430b2e2567900b3e659bb3aa5fda"} Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.467204 4856 generic.go:334] "Generic (PLEG): container finished" podID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerID="7ccc469f86b1e8a611045f1c54fa01b522ee23abeb5e672852e47abf996a2781" exitCode=0 Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.467306 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzwhc" event={"ID":"657ddf29-027f-425f-92bb-27a76a9c19c6","Type":"ContainerDied","Data":"7ccc469f86b1e8a611045f1c54fa01b522ee23abeb5e672852e47abf996a2781"} Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.472291 4856 generic.go:334] "Generic (PLEG): container finished" podID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerID="a6e80e12b9149e2a6a4820d6ce32d15963caca07f43394f511396d597058fdfd" exitCode=0 Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.472414 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhjvb" event={"ID":"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf","Type":"ContainerDied","Data":"a6e80e12b9149e2a6a4820d6ce32d15963caca07f43394f511396d597058fdfd"} Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.476766 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerID="218750b267ae397531b8026dde4ae8a357804ee86c5c9797436acffa003fd52d" exitCode=0 Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.476831 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxlt7" event={"ID":"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598","Type":"ContainerDied","Data":"218750b267ae397531b8026dde4ae8a357804ee86c5c9797436acffa003fd52d"} Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 07:06:58.483763 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" event={"ID":"53ec176a-6d8d-43a2-8523-78fd3cd12cd9","Type":"ContainerStarted","Data":"43e69b76d58a26c202080d829d34c3983df5620edb8f13521ecf8b74fb51bab2"} Nov 22 07:06:58 crc kubenswrapper[4856]: I1122 
07:06:58.583847 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hlxfj" podStartSLOduration=114.583823864 podStartE2EDuration="1m54.583823864s" podCreationTimestamp="2025-11-22 07:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:06:58.579104446 +0000 UTC m=+260.992497714" watchObservedRunningTime="2025-11-22 07:06:58.583823864 +0000 UTC m=+260.997217112" Nov 22 07:07:01 crc kubenswrapper[4856]: I1122 07:07:01.513775 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m79j6" event={"ID":"56f76d43-404b-4b05-97d9-39a17e5774ed","Type":"ContainerStarted","Data":"f5d08b907ce2983be6d182110a44ea39f360f0efea910c9ab325d51a1b8d9d1d"} Nov 22 07:07:01 crc kubenswrapper[4856]: I1122 07:07:01.517140 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzwhc" event={"ID":"657ddf29-027f-425f-92bb-27a76a9c19c6","Type":"ContainerStarted","Data":"2114761a68222146c49252abb82c010693ecd2f31f7f6807f7f54b412a11fb3e"} Nov 22 07:07:01 crc kubenswrapper[4856]: I1122 07:07:01.519629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhjvb" event={"ID":"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf","Type":"ContainerStarted","Data":"da12b6f3a0c8cf40bb14f6b14582144e0264481f9a8b3c18081f20ff195d97a1"} Nov 22 07:07:01 crc kubenswrapper[4856]: I1122 07:07:01.538918 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m79j6" podStartSLOduration=6.480036918 podStartE2EDuration="1m42.538899853s" podCreationTimestamp="2025-11-22 07:05:19 +0000 UTC" firstStartedPulling="2025-11-22 07:05:24.869166753 +0000 UTC m=+167.282560001" lastFinishedPulling="2025-11-22 07:07:00.928029638 +0000 UTC m=+263.341422936" observedRunningTime="2025-11-22 07:07:01.538536375 +0000 UTC m=+263.951929633" watchObservedRunningTime="2025-11-22 07:07:01.538899853 +0000 UTC m=+263.952293111" Nov 22 07:07:01 crc kubenswrapper[4856]: I1122 07:07:01.561388 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qzwhc" podStartSLOduration=7.8058008690000005 podStartE2EDuration="1m39.561350234s" podCreationTimestamp="2025-11-22 07:05:22 +0000 UTC" firstStartedPulling="2025-11-22 07:05:28.947770851 +0000 UTC m=+171.361164109" lastFinishedPulling="2025-11-22 07:07:00.703320226 +0000 UTC m=+263.116713474" observedRunningTime="2025-11-22 07:07:01.56071777 +0000 UTC m=+263.974111048" watchObservedRunningTime="2025-11-22 07:07:01.561350234 +0000 UTC m=+263.974743492" Nov 22 07:07:01 crc kubenswrapper[4856]: I1122 07:07:01.741076 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mhjvb" podStartSLOduration=6.712970797 podStartE2EDuration="1m38.741048582s" podCreationTimestamp="2025-11-22 07:05:23 +0000 UTC" firstStartedPulling="2025-11-22 07:05:28.94775141 +0000 UTC m=+171.361144668" lastFinishedPulling="2025-11-22 07:07:00.975829185 +0000 UTC m=+263.389222453" observedRunningTime="2025-11-22 07:07:01.595350368 +0000 UTC m=+264.008743636" watchObservedRunningTime="2025-11-22 07:07:01.741048582 +0000 UTC m=+264.154441860" Nov 22 07:07:02 crc kubenswrapper[4856]: I1122 07:07:02.528602 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-dxlt7" event={"ID":"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598","Type":"ContainerStarted","Data":"b1b4571db108f3f19bdae7ddc8d518dd2ac04f0c858d16dec57acf5f3ad7c5f8"} Nov 22 07:07:02 crc kubenswrapper[4856]: I1122 07:07:02.550083 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dxlt7" podStartSLOduration=8.217906409 podStartE2EDuration="1m41.550060804s" podCreationTimestamp="2025-11-22 07:05:21 +0000 UTC" firstStartedPulling="2025-11-22 07:05:27.886682028 +0000 UTC m=+170.300075286" lastFinishedPulling="2025-11-22 07:07:01.218836393 +0000 UTC m=+263.632229681" observedRunningTime="2025-11-22 07:07:02.549669136 +0000 UTC m=+264.963062424" watchObservedRunningTime="2025-11-22 07:07:02.550060804 +0000 UTC m=+264.963454062" Nov 22 07:07:02 crc kubenswrapper[4856]: I1122 07:07:02.615735 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:07:02 crc kubenswrapper[4856]: I1122 07:07:02.616075 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:07:03 crc kubenswrapper[4856]: I1122 07:07:03.706435 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:07:03 crc kubenswrapper[4856]: I1122 07:07:03.706743 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:07:04 crc kubenswrapper[4856]: I1122 07:07:04.456671 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qzwhc" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="registry-server" probeResult="failure" output=< Nov 22 07:07:04 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:07:04 crc kubenswrapper[4856]: > Nov 22 07:07:04 crc kubenswrapper[4856]: I1122 07:07:04.755088 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mhjvb" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="registry-server" probeResult="failure" output=< Nov 22 07:07:04 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:07:04 crc kubenswrapper[4856]: > Nov 22 07:07:06 crc kubenswrapper[4856]: I1122 07:07:06.365184 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:07:06 crc kubenswrapper[4856]: I1122 07:07:06.365538 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:07:06 crc kubenswrapper[4856]: I1122 07:07:06.365330 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:07:06 crc kubenswrapper[4856]: I1122 07:07:06.365682 4856 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:07:08 crc kubenswrapper[4856]: I1122 07:07:08.206290 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:07:10 crc kubenswrapper[4856]: I1122 07:07:10.352670 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:07:10 crc kubenswrapper[4856]: I1122 07:07:10.353151 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:07:10 crc kubenswrapper[4856]: I1122 07:07:10.520875 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:07:10 crc kubenswrapper[4856]: I1122 07:07:10.616364 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:07:10 crc kubenswrapper[4856]: I1122 07:07:10.760396 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m79j6"] Nov 22 07:07:12 crc kubenswrapper[4856]: I1122 07:07:12.146881 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:07:12 crc kubenswrapper[4856]: I1122 07:07:12.146949 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:07:12 crc kubenswrapper[4856]: I1122 07:07:12.201655 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:07:12 crc kubenswrapper[4856]: I1122 07:07:12.592605 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m79j6" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="registry-server" containerID="cri-o://f5d08b907ce2983be6d182110a44ea39f360f0efea910c9ab325d51a1b8d9d1d" gracePeriod=2 Nov 22 07:07:12 crc kubenswrapper[4856]: I1122 07:07:12.635140 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:07:12 crc kubenswrapper[4856]: I1122 07:07:12.665405 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:07:12 crc kubenswrapper[4856]: I1122 07:07:12.724534 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:07:13 crc kubenswrapper[4856]: I1122 07:07:13.156619 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzwhc"] Nov 22 07:07:13 crc kubenswrapper[4856]: I1122 07:07:13.748243 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:07:13 crc kubenswrapper[4856]: I1122 07:07:13.796535 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:07:14 crc kubenswrapper[4856]: I1122 07:07:14.606094 4856 generic.go:334] "Generic 
(PLEG): container finished" podID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerID="f5d08b907ce2983be6d182110a44ea39f360f0efea910c9ab325d51a1b8d9d1d" exitCode=0 Nov 22 07:07:14 crc kubenswrapper[4856]: I1122 07:07:14.606184 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m79j6" event={"ID":"56f76d43-404b-4b05-97d9-39a17e5774ed","Type":"ContainerDied","Data":"f5d08b907ce2983be6d182110a44ea39f360f0efea910c9ab325d51a1b8d9d1d"} Nov 22 07:07:14 crc kubenswrapper[4856]: I1122 07:07:14.606547 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qzwhc" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="registry-server" containerID="cri-o://2114761a68222146c49252abb82c010693ecd2f31f7f6807f7f54b412a11fb3e" gracePeriod=2 Nov 22 07:07:15 crc kubenswrapper[4856]: I1122 07:07:15.612084 4856 generic.go:334] "Generic (PLEG): container finished" podID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerID="2114761a68222146c49252abb82c010693ecd2f31f7f6807f7f54b412a11fb3e" exitCode=0 Nov 22 07:07:15 crc kubenswrapper[4856]: I1122 07:07:15.612173 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzwhc" event={"ID":"657ddf29-027f-425f-92bb-27a76a9c19c6","Type":"ContainerDied","Data":"2114761a68222146c49252abb82c010693ecd2f31f7f6807f7f54b412a11fb3e"} Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.154810 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhjvb"] Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.155011 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mhjvb" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="registry-server" containerID="cri-o://da12b6f3a0c8cf40bb14f6b14582144e0264481f9a8b3c18081f20ff195d97a1" gracePeriod=2 Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.364685 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.364721 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.364738 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.364773 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.364818 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-psfmg" 
Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.365313 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.365330 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"9fc59df7a6f0300e18ead1fd3103bee76cd54e6242bbdb3c251250b3461dd7d5"} pod="openshift-console/downloads-7954f5f757-psfmg" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.365362 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" containerID="cri-o://9fc59df7a6f0300e18ead1fd3103bee76cd54e6242bbdb3c251250b3461dd7d5" gracePeriod=2 Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.365338 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.527148 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.620017 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m79j6" event={"ID":"56f76d43-404b-4b05-97d9-39a17e5774ed","Type":"ContainerDied","Data":"77c7ecf1b52c9d437296280b7599642a1172a04feefa94231a2f0a37eb7b7247"} Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.620127 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m79j6" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.620109 4856 scope.go:117] "RemoveContainer" containerID="f5d08b907ce2983be6d182110a44ea39f360f0efea910c9ab325d51a1b8d9d1d" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.707401 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-catalog-content\") pod \"56f76d43-404b-4b05-97d9-39a17e5774ed\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.707481 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-utilities\") pod \"56f76d43-404b-4b05-97d9-39a17e5774ed\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.707615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqgpt\" (UniqueName: \"kubernetes.io/projected/56f76d43-404b-4b05-97d9-39a17e5774ed-kube-api-access-vqgpt\") pod \"56f76d43-404b-4b05-97d9-39a17e5774ed\" (UID: \"56f76d43-404b-4b05-97d9-39a17e5774ed\") " Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.709446 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-utilities" (OuterVolumeSpecName: "utilities") pod "56f76d43-404b-4b05-97d9-39a17e5774ed" (UID: "56f76d43-404b-4b05-97d9-39a17e5774ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.714438 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f76d43-404b-4b05-97d9-39a17e5774ed-kube-api-access-vqgpt" (OuterVolumeSpecName: "kube-api-access-vqgpt") pod "56f76d43-404b-4b05-97d9-39a17e5774ed" (UID: "56f76d43-404b-4b05-97d9-39a17e5774ed"). InnerVolumeSpecName "kube-api-access-vqgpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.809050 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqgpt\" (UniqueName: \"kubernetes.io/projected/56f76d43-404b-4b05-97d9-39a17e5774ed-kube-api-access-vqgpt\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.809093 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:16 crc kubenswrapper[4856]: I1122 07:07:16.982946 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56f76d43-404b-4b05-97d9-39a17e5774ed" (UID: "56f76d43-404b-4b05-97d9-39a17e5774ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:17 crc kubenswrapper[4856]: I1122 07:07:17.012498 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f76d43-404b-4b05-97d9-39a17e5774ed-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:17 crc kubenswrapper[4856]: I1122 07:07:17.261595 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m79j6"] Nov 22 07:07:17 crc kubenswrapper[4856]: I1122 07:07:17.268173 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m79j6"] Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.386034 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.427067 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq78l\" (UniqueName: \"kubernetes.io/projected/657ddf29-027f-425f-92bb-27a76a9c19c6-kube-api-access-sq78l\") pod \"657ddf29-027f-425f-92bb-27a76a9c19c6\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.427111 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-catalog-content\") pod \"657ddf29-027f-425f-92bb-27a76a9c19c6\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.427158 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-utilities\") pod \"657ddf29-027f-425f-92bb-27a76a9c19c6\" (UID: \"657ddf29-027f-425f-92bb-27a76a9c19c6\") " Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.428062 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-utilities" (OuterVolumeSpecName: "utilities") pod "657ddf29-027f-425f-92bb-27a76a9c19c6" (UID: "657ddf29-027f-425f-92bb-27a76a9c19c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.431058 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657ddf29-027f-425f-92bb-27a76a9c19c6-kube-api-access-sq78l" (OuterVolumeSpecName: "kube-api-access-sq78l") pod "657ddf29-027f-425f-92bb-27a76a9c19c6" (UID: "657ddf29-027f-425f-92bb-27a76a9c19c6"). InnerVolumeSpecName "kube-api-access-sq78l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.442315 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "657ddf29-027f-425f-92bb-27a76a9c19c6" (UID: "657ddf29-027f-425f-92bb-27a76a9c19c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.530136 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.530202 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq78l\" (UniqueName: \"kubernetes.io/projected/657ddf29-027f-425f-92bb-27a76a9c19c6-kube-api-access-sq78l\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.530221 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/657ddf29-027f-425f-92bb-27a76a9c19c6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.636578 4856 generic.go:334] "Generic (PLEG): container finished" podID="52414feb-0c08-4591-a84a-985167853ba3" containerID="9fc59df7a6f0300e18ead1fd3103bee76cd54e6242bbdb3c251250b3461dd7d5" exitCode=0 Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.636644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psfmg" event={"ID":"52414feb-0c08-4591-a84a-985167853ba3","Type":"ContainerDied","Data":"9fc59df7a6f0300e18ead1fd3103bee76cd54e6242bbdb3c251250b3461dd7d5"} Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.640335 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qzwhc" event={"ID":"657ddf29-027f-425f-92bb-27a76a9c19c6","Type":"ContainerDied","Data":"2fa9b82af75e0c892addcfd362cc1f253d523ebd1000ec611b27fd0c7b4053d4"} Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.640575 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qzwhc" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.646130 4856 generic.go:334] "Generic (PLEG): container finished" podID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerID="da12b6f3a0c8cf40bb14f6b14582144e0264481f9a8b3c18081f20ff195d97a1" exitCode=0 Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.646186 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhjvb" event={"ID":"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf","Type":"ContainerDied","Data":"da12b6f3a0c8cf40bb14f6b14582144e0264481f9a8b3c18081f20ff195d97a1"} Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.675644 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzwhc"] Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.678845 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qzwhc"] Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.718819 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" path="/var/lib/kubelet/pods/56f76d43-404b-4b05-97d9-39a17e5774ed/volumes" Nov 22 07:07:18 crc kubenswrapper[4856]: I1122 07:07:18.719842 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" path="/var/lib/kubelet/pods/657ddf29-027f-425f-92bb-27a76a9c19c6/volumes" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.425442 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.442569 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj98g\" (UniqueName: \"kubernetes.io/projected/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-kube-api-access-bj98g\") pod \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.442777 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-utilities\") pod \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.442801 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-catalog-content\") pod \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\" (UID: \"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf\") " Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.443587 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-utilities" (OuterVolumeSpecName: "utilities") pod "bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" (UID: "bcf4bc58-e602-45b5-9b0c-3be4cb956dbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.447551 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-kube-api-access-bj98g" (OuterVolumeSpecName: "kube-api-access-bj98g") pod "bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" (UID: "bcf4bc58-e602-45b5-9b0c-3be4cb956dbf"). InnerVolumeSpecName "kube-api-access-bj98g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.544006 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.544044 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj98g\" (UniqueName: \"kubernetes.io/projected/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-kube-api-access-bj98g\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.653774 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhjvb" event={"ID":"bcf4bc58-e602-45b5-9b0c-3be4cb956dbf","Type":"ContainerDied","Data":"fb7db6aa527b7fc41ef739a2ce7337a572b90b7ab88df4ddb62c4331a4fa0719"} Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.653894 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhjvb" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.882219 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" (UID: "bcf4bc58-e602-45b5-9b0c-3be4cb956dbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.947559 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.985471 4856 scope.go:117] "RemoveContainer" containerID="80f26ec0a7633dfb22a582680a090b7374dd430b2e2567900b3e659bb3aa5fda" Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.987547 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhjvb"] Nov 22 07:07:19 crc kubenswrapper[4856]: I1122 07:07:19.995224 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mhjvb"] Nov 22 07:07:20 crc kubenswrapper[4856]: I1122 07:07:20.716116 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" path="/var/lib/kubelet/pods/bcf4bc58-e602-45b5-9b0c-3be4cb956dbf/volumes" Nov 22 07:07:20 crc kubenswrapper[4856]: I1122 07:07:20.753214 4856 scope.go:117] "RemoveContainer" containerID="8bda9a30d86c3e8d6faae562fe4de4759384aa6ff38989446f9e02dd413e1829" Nov 22 07:07:20 crc kubenswrapper[4856]: I1122 07:07:20.793161 4856 scope.go:117] "RemoveContainer" containerID="e23ade9a36de2945fdd952c812a20c1af881dd1ee5e0dc8dd858debd512f1f58" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.039672 4856 scope.go:117] "RemoveContainer" containerID="2114761a68222146c49252abb82c010693ecd2f31f7f6807f7f54b412a11fb3e" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.071285 4856 scope.go:117] "RemoveContainer" containerID="7ccc469f86b1e8a611045f1c54fa01b522ee23abeb5e672852e47abf996a2781" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.119739 4856 scope.go:117] "RemoveContainer" containerID="6acd901f83779136f4c44d2e31a782244adcd2490adc4491da55e97914fa42c9" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.166021 4856 scope.go:117] "RemoveContainer" containerID="da12b6f3a0c8cf40bb14f6b14582144e0264481f9a8b3c18081f20ff195d97a1" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.209363 4856 scope.go:117] "RemoveContainer" containerID="a6e80e12b9149e2a6a4820d6ce32d15963caca07f43394f511396d597058fdfd" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.261741 4856 scope.go:117] "RemoveContainer" containerID="d6d0e3806b6c53ae332780f09d95e1547fbbd715b9f2ea455f487da1cfdfe6c4" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.666679 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerStarted","Data":"24cb499d4151a464e7b9a32f655039746ac307b3792c7f4df97600e5fd9dc956"} Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.668337 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerStarted","Data":"88ca96eb16429d3ed58e5f677f7acc92c2f4e8a53fe8d1764e5f92840c5f099a"} Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.671246 4856 generic.go:334] "Generic (PLEG): container finished" podID="072e5312-2542-496b-bda2-58d411f4f1c3" containerID="738dd9b27b1d293d8078add503cb3d4cb5a4786626c9f8935f4bd3995c578d02" exitCode=0 Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.671298 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-xx8tx" event={"ID":"072e5312-2542-496b-bda2-58d411f4f1c3","Type":"ContainerDied","Data":"738dd9b27b1d293d8078add503cb3d4cb5a4786626c9f8935f4bd3995c578d02"} Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.673548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-psfmg" event={"ID":"52414feb-0c08-4591-a84a-985167853ba3","Type":"ContainerStarted","Data":"b64a4710a525ef25217443439da9f8c17e34de9019a80ca2a657672753da1dee"} Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.674019 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.674114 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.674152 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.675548 4856 generic.go:334] "Generic (PLEG): container finished" podID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerID="8207b09b5a884d3a53ba25a5bdb4275045b617f883d2be3f134ab1631731eff5" exitCode=0 Nov 22 07:07:21 crc kubenswrapper[4856]: I1122 07:07:21.675592 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ztc" event={"ID":"02bb740d-242c-4846-8bbf-5fe3e4f1b97a","Type":"ContainerDied","Data":"8207b09b5a884d3a53ba25a5bdb4275045b617f883d2be3f134ab1631731eff5"} Nov 22 07:07:22 crc kubenswrapper[4856]: I1122 07:07:22.695905 4856 generic.go:334] "Generic (PLEG): container finished" podID="52860224-c188-4eda-830e-9101706f4ce2" containerID="24cb499d4151a464e7b9a32f655039746ac307b3792c7f4df97600e5fd9dc956" exitCode=0 Nov 22 07:07:22 crc kubenswrapper[4856]: I1122 07:07:22.696343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerDied","Data":"24cb499d4151a464e7b9a32f655039746ac307b3792c7f4df97600e5fd9dc956"} Nov 22 07:07:22 crc kubenswrapper[4856]: I1122 07:07:22.708330 4856 generic.go:334] "Generic (PLEG): container finished" podID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerID="88ca96eb16429d3ed58e5f677f7acc92c2f4e8a53fe8d1764e5f92840c5f099a" exitCode=0 Nov 22 07:07:22 crc kubenswrapper[4856]: I1122 07:07:22.709177 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerDied","Data":"88ca96eb16429d3ed58e5f677f7acc92c2f4e8a53fe8d1764e5f92840c5f099a"} Nov 22 07:07:22 crc kubenswrapper[4856]: I1122 07:07:22.709695 4856 patch_prober.go:28] interesting pod/downloads-7954f5f757-psfmg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Nov 22 07:07:22 crc kubenswrapper[4856]: I1122 07:07:22.709742 4856 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-psfmg" podUID="52414feb-0c08-4591-a84a-985167853ba3" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.26:8080/\": dial tcp 10.217.0.26:8080: connect: connection refused" Nov 22 07:07:26 crc kubenswrapper[4856]: I1122 07:07:26.393503 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-psfmg" Nov 22 07:07:26 crc kubenswrapper[4856]: I1122 07:07:26.726611 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xx8tx" event={"ID":"072e5312-2542-496b-bda2-58d411f4f1c3","Type":"ContainerStarted","Data":"cebcb5ead6cb9c3e566da54d39efa7a962e9ac194f5b1bac84ada241666e12bd"} Nov 22 07:07:26 crc kubenswrapper[4856]: I1122 07:07:26.748475 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xx8tx" podStartSLOduration=8.948835108 podStartE2EDuration="2m6.748457243s" podCreationTimestamp="2025-11-22 07:05:20 +0000 UTC" firstStartedPulling="2025-11-22 07:05:24.855568638 +0000 UTC m=+167.268961896" lastFinishedPulling="2025-11-22 07:07:22.655190773 +0000 UTC m=+285.068584031" observedRunningTime="2025-11-22 07:07:26.747122363 +0000 UTC m=+289.160515621" watchObservedRunningTime="2025-11-22 07:07:26.748457243 +0000 UTC m=+289.161850511" Nov 22 07:07:27 crc kubenswrapper[4856]: I1122 07:07:27.694340 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2hxc"] Nov 22 07:07:29 crc kubenswrapper[4856]: I1122 07:07:29.742396 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ztc" event={"ID":"02bb740d-242c-4846-8bbf-5fe3e4f1b97a","Type":"ContainerStarted","Data":"22446d1b76478eca78a08dca44b3932bb1fb7a776fc514a49366a626cf06ad5d"} Nov 22 07:07:30 crc kubenswrapper[4856]: I1122 07:07:30.574149 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:07:30 crc kubenswrapper[4856]: I1122 07:07:30.574704 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:07:30 crc kubenswrapper[4856]: I1122 07:07:30.615014 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:07:30 crc kubenswrapper[4856]: I1122 07:07:30.770100 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-75ztc" podStartSLOduration=7.485556161 podStartE2EDuration="2m11.770085046s" podCreationTimestamp="2025-11-22 07:05:19 +0000 UTC" firstStartedPulling="2025-11-22 07:05:23.8439143 +0000 UTC m=+166.257307558" lastFinishedPulling="2025-11-22 07:07:28.128443185 +0000 UTC m=+290.541836443" observedRunningTime="2025-11-22 07:07:30.76765118 +0000 UTC m=+293.181044448" watchObservedRunningTime="2025-11-22 07:07:30.770085046 +0000 UTC m=+293.183478304" Nov 22 07:07:30 crc kubenswrapper[4856]: I1122 07:07:30.790984 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:07:31 crc kubenswrapper[4856]: I1122 07:07:31.355411 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xx8tx"] Nov 22 07:07:32 crc kubenswrapper[4856]: I1122 
07:07:32.757674 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xx8tx" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="registry-server" containerID="cri-o://cebcb5ead6cb9c3e566da54d39efa7a962e9ac194f5b1bac84ada241666e12bd" gracePeriod=2 Nov 22 07:07:33 crc kubenswrapper[4856]: I1122 07:07:33.777298 4856 generic.go:334] "Generic (PLEG): container finished" podID="072e5312-2542-496b-bda2-58d411f4f1c3" containerID="cebcb5ead6cb9c3e566da54d39efa7a962e9ac194f5b1bac84ada241666e12bd" exitCode=0 Nov 22 07:07:33 crc kubenswrapper[4856]: I1122 07:07:33.777383 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xx8tx" event={"ID":"072e5312-2542-496b-bda2-58d411f4f1c3","Type":"ContainerDied","Data":"cebcb5ead6cb9c3e566da54d39efa7a962e9ac194f5b1bac84ada241666e12bd"} Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.132761 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.256243 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-utilities\") pod \"072e5312-2542-496b-bda2-58d411f4f1c3\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.256318 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-catalog-content\") pod \"072e5312-2542-496b-bda2-58d411f4f1c3\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.256410 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjfxx\" (UniqueName: \"kubernetes.io/projected/072e5312-2542-496b-bda2-58d411f4f1c3-kube-api-access-qjfxx\") pod \"072e5312-2542-496b-bda2-58d411f4f1c3\" (UID: \"072e5312-2542-496b-bda2-58d411f4f1c3\") " Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.257204 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-utilities" (OuterVolumeSpecName: "utilities") pod "072e5312-2542-496b-bda2-58d411f4f1c3" (UID: "072e5312-2542-496b-bda2-58d411f4f1c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.262833 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/072e5312-2542-496b-bda2-58d411f4f1c3-kube-api-access-qjfxx" (OuterVolumeSpecName: "kube-api-access-qjfxx") pod "072e5312-2542-496b-bda2-58d411f4f1c3" (UID: "072e5312-2542-496b-bda2-58d411f4f1c3"). InnerVolumeSpecName "kube-api-access-qjfxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.357576 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjfxx\" (UniqueName: \"kubernetes.io/projected/072e5312-2542-496b-bda2-58d411f4f1c3-kube-api-access-qjfxx\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.357608 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.790960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xx8tx" event={"ID":"072e5312-2542-496b-bda2-58d411f4f1c3","Type":"ContainerDied","Data":"1c8c6e0e0038cf6719881be81c738d4d0260e501ef05a9804780b51684d8d3dc"} Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.791040 4856 scope.go:117] "RemoveContainer" containerID="cebcb5ead6cb9c3e566da54d39efa7a962e9ac194f5b1bac84ada241666e12bd" Nov 22 07:07:35 crc kubenswrapper[4856]: I1122 07:07:35.791050 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xx8tx" Nov 22 07:07:37 crc kubenswrapper[4856]: I1122 07:07:37.087713 4856 scope.go:117] "RemoveContainer" containerID="738dd9b27b1d293d8078add503cb3d4cb5a4786626c9f8935f4bd3995c578d02" Nov 22 07:07:37 crc kubenswrapper[4856]: I1122 07:07:37.104465 4856 scope.go:117] "RemoveContainer" containerID="db326da0681fefa891ed72ba815941c1b84becf24fa5a429860ac23c012d4797" Nov 22 07:07:38 crc kubenswrapper[4856]: I1122 07:07:38.816780 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerStarted","Data":"126b9246c8f41cac0b15cc326516fc47f9772f50bd0b584727a432cd18d95673"} Nov 22 07:07:38 crc kubenswrapper[4856]: I1122 07:07:38.822249 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerStarted","Data":"fd30092e9d73f425bfde6cb3e6a4ed66e2d7c19e1f29a09b54991948f942fd61"} Nov 22 07:07:38 crc kubenswrapper[4856]: I1122 07:07:38.862639 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gvf9n" podStartSLOduration=7.631297812 podStartE2EDuration="2m19.862615299s" podCreationTimestamp="2025-11-22 07:05:19 +0000 UTC" firstStartedPulling="2025-11-22 07:05:24.856727175 +0000 UTC m=+167.270120433" lastFinishedPulling="2025-11-22 07:07:37.088044652 +0000 UTC m=+299.501437920" observedRunningTime="2025-11-22 07:07:38.840954977 +0000 UTC m=+301.254348235" watchObservedRunningTime="2025-11-22 07:07:38.862615299 +0000 UTC m=+301.276008547" Nov 22 07:07:38 crc kubenswrapper[4856]: I1122 07:07:38.862856 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sptws" podStartSLOduration=8.141205343 podStartE2EDuration="2m16.862850635s" podCreationTimestamp="2025-11-22 07:05:22 +0000 UTC" firstStartedPulling="2025-11-22 07:05:27.886160166 +0000 UTC m=+170.299553424" lastFinishedPulling="2025-11-22 07:07:36.607805458 +0000 UTC m=+299.021198716" observedRunningTime="2025-11-22 07:07:38.859613701 +0000 UTC m=+301.273006969" watchObservedRunningTime="2025-11-22 07:07:38.862850635 +0000 UTC 
m=+301.276243883" Nov 22 07:07:39 crc kubenswrapper[4856]: I1122 07:07:39.969542 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:07:39 crc kubenswrapper[4856]: I1122 07:07:39.970541 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:07:40 crc kubenswrapper[4856]: I1122 07:07:40.011296 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:07:40 crc kubenswrapper[4856]: I1122 07:07:40.150939 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:07:40 crc kubenswrapper[4856]: I1122 07:07:40.151523 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:07:40 crc kubenswrapper[4856]: I1122 07:07:40.189165 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:07:40 crc kubenswrapper[4856]: I1122 07:07:40.865762 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:07:42 crc kubenswrapper[4856]: I1122 07:07:42.677594 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "072e5312-2542-496b-bda2-58d411f4f1c3" (UID: "072e5312-2542-496b-bda2-58d411f4f1c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:07:42 crc kubenswrapper[4856]: I1122 07:07:42.719284 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xx8tx"] Nov 22 07:07:42 crc kubenswrapper[4856]: I1122 07:07:42.722560 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xx8tx"] Nov 22 07:07:42 crc kubenswrapper[4856]: I1122 07:07:42.749276 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/072e5312-2542-496b-bda2-58d411f4f1c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:43 crc kubenswrapper[4856]: I1122 07:07:43.185419 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:07:43 crc kubenswrapper[4856]: I1122 07:07:43.185541 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:07:43 crc kubenswrapper[4856]: I1122 07:07:43.230901 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:07:43 crc kubenswrapper[4856]: I1122 07:07:43.878236 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:07:44 crc kubenswrapper[4856]: I1122 07:07:44.718624 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" path="/var/lib/kubelet/pods/072e5312-2542-496b-bda2-58d411f4f1c3/volumes" Nov 22 07:07:50 crc kubenswrapper[4856]: I1122 07:07:50.191878 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:07:52 crc kubenswrapper[4856]: I1122 07:07:52.722121 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" containerID="cri-o://642f26ddc4d161287c6d6419e8101b904d24c69b7ad7273e694a840097e31547" gracePeriod=15 Nov 22 07:07:56 crc kubenswrapper[4856]: I1122 07:07:56.918132 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerID="642f26ddc4d161287c6d6419e8101b904d24c69b7ad7273e694a840097e31547" exitCode=0 Nov 22 07:07:56 crc kubenswrapper[4856]: I1122 07:07:56.918245 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" event={"ID":"9a816ade-c1d6-48c0-a246-4d3407f90e58","Type":"ContainerDied","Data":"642f26ddc4d161287c6d6419e8101b904d24c69b7ad7273e694a840097e31547"} Nov 22 07:07:57 crc kubenswrapper[4856]: I1122 07:07:57.624956 4856 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-s2hxc container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Nov 22 07:07:57 crc kubenswrapper[4856]: I1122 07:07:57.625030 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.915871 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.966744 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.966741 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-s2hxc" event={"ID":"9a816ade-c1d6-48c0-a246-4d3407f90e58","Type":"ContainerDied","Data":"4a8025d528eb75f91f293d8bd8881eea7b72255a319dafb320e24c37a1d86221"} Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.966946 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5f78599457-lsztc"] Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967234 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967249 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967265 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baa1d67b-074f-437d-9b55-2d0522bb1db8" containerName="pruner" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967273 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="baa1d67b-074f-437d-9b55-2d0522bb1db8" containerName="pruner" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967461 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967519 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967538 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967546 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967558 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967567 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967577 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967585 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967598 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967608 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967623 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" 
containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967634 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967647 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967658 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="extract-content" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967670 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8717add-c237-46d4-8ea3-dc4c6b8cbeb8" containerName="pruner" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967679 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8717add-c237-46d4-8ea3-dc4c6b8cbeb8" containerName="pruner" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967691 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967703 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967716 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967725 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967738 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967749 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967765 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967774 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" Nov 22 07:07:58 crc kubenswrapper[4856]: E1122 07:07:58.967786 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967796 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="extract-utilities" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.967804 4856 scope.go:117] "RemoveContainer" containerID="642f26ddc4d161287c6d6419e8101b904d24c69b7ad7273e694a840097e31547" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968017 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8717add-c237-46d4-8ea3-dc4c6b8cbeb8" containerName="pruner" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968042 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="baa1d67b-074f-437d-9b55-2d0522bb1db8" 
containerName="pruner" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968060 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" containerName="oauth-openshift" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968072 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="657ddf29-027f-425f-92bb-27a76a9c19c6" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968108 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="072e5312-2542-496b-bda2-58d411f4f1c3" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968121 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf4bc58-e602-45b5-9b0c-3be4cb956dbf" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968134 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f76d43-404b-4b05-97d9-39a17e5774ed" containerName="registry-server" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.968932 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:58 crc kubenswrapper[4856]: I1122 07:07:58.971189 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f78599457-lsztc"] Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.066983 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-service-ca\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067062 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-idp-0-file-data\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067084 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-serving-cert\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067121 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-router-certs\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-policies\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067192 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-dir\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067215 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-ocp-branding-template\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067253 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-login\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067297 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-trusted-ca-bundle\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067348 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-error\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067338 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067374 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stzmp\" (UniqueName: \"kubernetes.io/projected/9a816ade-c1d6-48c0-a246-4d3407f90e58-kube-api-access-stzmp\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067541 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-session\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067624 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-cliconfig\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.067654 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-provider-selection\") pod \"9a816ade-c1d6-48c0-a246-4d3407f90e58\" (UID: \"9a816ade-c1d6-48c0-a246-4d3407f90e58\") " Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.068146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.068300 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.068329 4856 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.069428 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.069715 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.070233 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.074490 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.075288 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.078779 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a816ade-c1d6-48c0-a246-4d3407f90e58-kube-api-access-stzmp" (OuterVolumeSpecName: "kube-api-access-stzmp") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "kube-api-access-stzmp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.079255 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.083350 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.084288 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.084857 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.085269 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.085540 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9a816ade-c1d6-48c0-a246-4d3407f90e58" (UID: "9a816ade-c1d6-48c0-a246-4d3407f90e58"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.170208 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.170373 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.170428 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.170465 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-login\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.170938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.170997 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85222058-81a4-4395-9292-f7b16d6e5669-audit-dir\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171048 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6g5l\" (UniqueName: \"kubernetes.io/projected/85222058-81a4-4395-9292-f7b16d6e5669-kube-api-access-z6g5l\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171109 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-error\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171273 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171317 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-audit-policies\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171364 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171399 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-session\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171446 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171593 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171684 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171705 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171723 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171765 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stzmp\" (UniqueName: \"kubernetes.io/projected/9a816ade-c1d6-48c0-a246-4d3407f90e58-kube-api-access-stzmp\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171780 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171794 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171813 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171827 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171840 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-serving-cert\") on node 
\"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171857 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171872 4856 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a816ade-c1d6-48c0-a246-4d3407f90e58-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.171885 4856 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a816ade-c1d6-48c0-a246-4d3407f90e58-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.273687 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.273782 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-session\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.273823 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.273856 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.273890 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.273925 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " 
pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-login\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274095 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274125 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85222058-81a4-4395-9292-f7b16d6e5669-audit-dir\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274161 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6g5l\" (UniqueName: \"kubernetes.io/projected/85222058-81a4-4395-9292-f7b16d6e5669-kube-api-access-z6g5l\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274191 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-error\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.274250 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-audit-policies\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc 
kubenswrapper[4856]: I1122 07:07:59.275420 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-audit-policies\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.275857 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.280214 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-session\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.281188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-login\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.281892 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.281945 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/85222058-81a4-4395-9292-f7b16d6e5669-audit-dir\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.284248 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.287284 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.288576 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-template-error\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.288679 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-service-ca\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.292804 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.294967 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.295684 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/85222058-81a4-4395-9292-f7b16d6e5669-v4-0-config-system-router-certs\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.303457 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6g5l\" (UniqueName: \"kubernetes.io/projected/85222058-81a4-4395-9292-f7b16d6e5669-kube-api-access-z6g5l\") pod \"oauth-openshift-5f78599457-lsztc\" (UID: \"85222058-81a4-4395-9292-f7b16d6e5669\") " pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.348831 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2hxc"] Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.353245 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-s2hxc"] Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.585577 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.840137 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5f78599457-lsztc"] Nov 22 07:07:59 crc kubenswrapper[4856]: I1122 07:07:59.974124 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" event={"ID":"85222058-81a4-4395-9292-f7b16d6e5669","Type":"ContainerStarted","Data":"1f2dabb5e5bd4f06328effad65e8f071eaa110a941338487c635322a6bc66b41"} Nov 22 07:08:00 crc kubenswrapper[4856]: I1122 07:08:00.723704 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a816ade-c1d6-48c0-a246-4d3407f90e58" path="/var/lib/kubelet/pods/9a816ade-c1d6-48c0-a246-4d3407f90e58/volumes" Nov 22 07:08:01 crc kubenswrapper[4856]: I1122 07:08:01.989122 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" event={"ID":"85222058-81a4-4395-9292-f7b16d6e5669","Type":"ContainerStarted","Data":"6d610afc5578985d1950f125dafd904f405e75918e8c529c030d3fb9ef90655b"} Nov 22 07:08:02 crc kubenswrapper[4856]: I1122 07:08:02.995363 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:08:03 crc kubenswrapper[4856]: I1122 07:08:03.001023 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" Nov 22 07:08:03 crc kubenswrapper[4856]: I1122 07:08:03.023108 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" podStartSLOduration=36.023086755 podStartE2EDuration="36.023086755s" podCreationTimestamp="2025-11-22 07:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:08:03.020450934 +0000 UTC m=+325.433844202" watchObservedRunningTime="2025-11-22 07:08:03.023086755 +0000 UTC m=+325.436480013" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.747528 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gvf9n"] Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.749013 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gvf9n" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="registry-server" containerID="cri-o://126b9246c8f41cac0b15cc326516fc47f9772f50bd0b584727a432cd18d95673" gracePeriod=30 Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.760266 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75ztc"] Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.761127 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-75ztc" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="registry-server" containerID="cri-o://22446d1b76478eca78a08dca44b3932bb1fb7a776fc514a49366a626cf06ad5d" gracePeriod=30 Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.765356 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x4fc7"] Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.765634 4856 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" containerID="cri-o://e8fda0b1d8dbbcd711bba9088ec5870e05fdd03cb7cf57ba13d6a369dbf3c804" gracePeriod=30 Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.779884 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxlt7"] Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.780750 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dxlt7" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="registry-server" containerID="cri-o://b1b4571db108f3f19bdae7ddc8d518dd2ac04f0c858d16dec57acf5f3ad7c5f8" gracePeriod=30 Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.791696 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwqfg"] Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.793224 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.796638 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sptws"] Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.797193 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sptws" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="registry-server" containerID="cri-o://fd30092e9d73f425bfde6cb3e6a4ed66e2d7c19e1f29a09b54991948f942fd61" gracePeriod=30 Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.817664 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwqfg"] Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.892603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2a94a89-16b5-480b-b1fd-18af97bc38da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.892653 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e2a94a89-16b5-480b-b1fd-18af97bc38da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.892678 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxjq\" (UniqueName: \"kubernetes.io/projected/e2a94a89-16b5-480b-b1fd-18af97bc38da-kube-api-access-sjxjq\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.993916 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/e2a94a89-16b5-480b-b1fd-18af97bc38da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.993979 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxjq\" (UniqueName: \"kubernetes.io/projected/e2a94a89-16b5-480b-b1fd-18af97bc38da-kube-api-access-sjxjq\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.994082 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2a94a89-16b5-480b-b1fd-18af97bc38da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:25 crc kubenswrapper[4856]: I1122 07:08:25.996225 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e2a94a89-16b5-480b-b1fd-18af97bc38da-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:26 crc kubenswrapper[4856]: I1122 07:08:26.002675 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e2a94a89-16b5-480b-b1fd-18af97bc38da-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:26 crc kubenswrapper[4856]: I1122 07:08:26.014245 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjxjq\" (UniqueName: \"kubernetes.io/projected/e2a94a89-16b5-480b-b1fd-18af97bc38da-kube-api-access-sjxjq\") pod \"marketplace-operator-79b997595-kwqfg\" (UID: \"e2a94a89-16b5-480b-b1fd-18af97bc38da\") " pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:26 crc kubenswrapper[4856]: I1122 07:08:26.114308 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:26 crc kubenswrapper[4856]: I1122 07:08:26.343735 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kwqfg"] Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.143632 4856 generic.go:334] "Generic (PLEG): container finished" podID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerID="22446d1b76478eca78a08dca44b3932bb1fb7a776fc514a49366a626cf06ad5d" exitCode=0 Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.144237 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ztc" event={"ID":"02bb740d-242c-4846-8bbf-5fe3e4f1b97a","Type":"ContainerDied","Data":"22446d1b76478eca78a08dca44b3932bb1fb7a776fc514a49366a626cf06ad5d"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.144551 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-75ztc" event={"ID":"02bb740d-242c-4846-8bbf-5fe3e4f1b97a","Type":"ContainerDied","Data":"eafca7ea07a33013d7dee24b9508e3e2b9b8ae620c158d9c14cc146be0bc85fc"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.144579 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eafca7ea07a33013d7dee24b9508e3e2b9b8ae620c158d9c14cc146be0bc85fc" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.151817 4856 generic.go:334] "Generic (PLEG): container finished" podID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerID="fd30092e9d73f425bfde6cb3e6a4ed66e2d7c19e1f29a09b54991948f942fd61" exitCode=0 Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.152052 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerDied","Data":"fd30092e9d73f425bfde6cb3e6a4ed66e2d7c19e1f29a09b54991948f942fd61"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.158475 4856 generic.go:334] "Generic (PLEG): container finished" podID="52860224-c188-4eda-830e-9101706f4ce2" containerID="126b9246c8f41cac0b15cc326516fc47f9772f50bd0b584727a432cd18d95673" exitCode=0 Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.158669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerDied","Data":"126b9246c8f41cac0b15cc326516fc47f9772f50bd0b584727a432cd18d95673"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.163779 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" event={"ID":"e2a94a89-16b5-480b-b1fd-18af97bc38da","Type":"ContainerStarted","Data":"927dec7cc46779a6b3e040511e92052856b864742a052b300a83b3e5d900ce77"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.163824 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" event={"ID":"e2a94a89-16b5-480b-b1fd-18af97bc38da","Type":"ContainerStarted","Data":"07118ca1df6f588eabf4863e66087e181c2255c40c0fb9272f2158f38a6eda39"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.165640 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.169928 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kwqfg 
container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" start-of-body= Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.170003 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" podUID="e2a94a89-16b5-480b-b1fd-18af97bc38da" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.55:8080/healthz\": dial tcp 10.217.0.55:8080: connect: connection refused" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.172453 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerID="b1b4571db108f3f19bdae7ddc8d518dd2ac04f0c858d16dec57acf5f3ad7c5f8" exitCode=0 Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.172553 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxlt7" event={"ID":"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598","Type":"ContainerDied","Data":"b1b4571db108f3f19bdae7ddc8d518dd2ac04f0c858d16dec57acf5f3ad7c5f8"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.174381 4856 generic.go:334] "Generic (PLEG): container finished" podID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerID="e8fda0b1d8dbbcd711bba9088ec5870e05fdd03cb7cf57ba13d6a369dbf3c804" exitCode=0 Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.174447 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" event={"ID":"aa013f01-5701-4d63-bc2c-284f5d4a397f","Type":"ContainerDied","Data":"e8fda0b1d8dbbcd711bba9088ec5870e05fdd03cb7cf57ba13d6a369dbf3c804"} Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.190907 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" podStartSLOduration=2.190888587 podStartE2EDuration="2.190888587s" podCreationTimestamp="2025-11-22 07:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:08:27.188797682 +0000 UTC m=+349.602190940" watchObservedRunningTime="2025-11-22 07:08:27.190888587 +0000 UTC m=+349.604281845" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.215021 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.334030 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.336127 4856 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-x4fc7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.336177 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.417621 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6swr\" (UniqueName: \"kubernetes.io/projected/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-kube-api-access-r6swr\") pod \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.417692 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-utilities\") pod \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.417839 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-catalog-content\") pod \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\" (UID: \"02bb740d-242c-4846-8bbf-5fe3e4f1b97a\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.420373 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-utilities" (OuterVolumeSpecName: "utilities") pod "02bb740d-242c-4846-8bbf-5fe3e4f1b97a" (UID: "02bb740d-242c-4846-8bbf-5fe3e4f1b97a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.427287 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-kube-api-access-r6swr" (OuterVolumeSpecName: "kube-api-access-r6swr") pod "02bb740d-242c-4846-8bbf-5fe3e4f1b97a" (UID: "02bb740d-242c-4846-8bbf-5fe3e4f1b97a"). InnerVolumeSpecName "kube-api-access-r6swr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.445989 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.508216 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.516354 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.519543 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-catalog-content\") pod \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.519633 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-utilities\") pod \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.519699 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8pqb\" (UniqueName: \"kubernetes.io/projected/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-kube-api-access-c8pqb\") pod \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\" (UID: \"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.519974 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-trusted-ca\") pod \"aa013f01-5701-4d63-bc2c-284f5d4a397f\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.520012 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8sjp\" (UniqueName: \"kubernetes.io/projected/52860224-c188-4eda-830e-9101706f4ce2-kube-api-access-h8sjp\") pod \"52860224-c188-4eda-830e-9101706f4ce2\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.520037 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-catalog-content\") pod \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.520086 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-operator-metrics\") pod \"aa013f01-5701-4d63-bc2c-284f5d4a397f\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.520445 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6swr\" (UniqueName: \"kubernetes.io/projected/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-kube-api-access-r6swr\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.520474 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.522037 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-utilities" (OuterVolumeSpecName: "utilities") pod "e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" (UID: "e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.526801 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "aa013f01-5701-4d63-bc2c-284f5d4a397f" (UID: "aa013f01-5701-4d63-bc2c-284f5d4a397f"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.526975 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52860224-c188-4eda-830e-9101706f4ce2-kube-api-access-h8sjp" (OuterVolumeSpecName: "kube-api-access-h8sjp") pod "52860224-c188-4eda-830e-9101706f4ce2" (UID: "52860224-c188-4eda-830e-9101706f4ce2"). InnerVolumeSpecName "kube-api-access-h8sjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.541522 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02bb740d-242c-4846-8bbf-5fe3e4f1b97a" (UID: "02bb740d-242c-4846-8bbf-5fe3e4f1b97a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.538543 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "aa013f01-5701-4d63-bc2c-284f5d4a397f" (UID: "aa013f01-5701-4d63-bc2c-284f5d4a397f"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.539183 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-kube-api-access-c8pqb" (OuterVolumeSpecName: "kube-api-access-c8pqb") pod "e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" (UID: "e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598"). InnerVolumeSpecName "kube-api-access-c8pqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.558452 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" (UID: "e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.621135 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-catalog-content\") pod \"52860224-c188-4eda-830e-9101706f4ce2\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.621402 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxbrk\" (UniqueName: \"kubernetes.io/projected/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-kube-api-access-cxbrk\") pod \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.621961 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-utilities\") pod \"52860224-c188-4eda-830e-9101706f4ce2\" (UID: \"52860224-c188-4eda-830e-9101706f4ce2\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622016 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r67bs\" (UniqueName: \"kubernetes.io/projected/aa013f01-5701-4d63-bc2c-284f5d4a397f-kube-api-access-r67bs\") pod \"aa013f01-5701-4d63-bc2c-284f5d4a397f\" (UID: \"aa013f01-5701-4d63-bc2c-284f5d4a397f\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622056 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-utilities\") pod \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\" (UID: \"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6\") " Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622371 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02bb740d-242c-4846-8bbf-5fe3e4f1b97a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622390 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622403 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622415 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8pqb\" (UniqueName: \"kubernetes.io/projected/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598-kube-api-access-c8pqb\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622426 4856 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622436 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8sjp\" (UniqueName: \"kubernetes.io/projected/52860224-c188-4eda-830e-9101706f4ce2-kube-api-access-h8sjp\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.622447 4856 
reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/aa013f01-5701-4d63-bc2c-284f5d4a397f-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.623178 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-utilities" (OuterVolumeSpecName: "utilities") pod "bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" (UID: "bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.623489 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-utilities" (OuterVolumeSpecName: "utilities") pod "52860224-c188-4eda-830e-9101706f4ce2" (UID: "52860224-c188-4eda-830e-9101706f4ce2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.624235 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-kube-api-access-cxbrk" (OuterVolumeSpecName: "kube-api-access-cxbrk") pod "bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" (UID: "bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6"). InnerVolumeSpecName "kube-api-access-cxbrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.626329 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa013f01-5701-4d63-bc2c-284f5d4a397f-kube-api-access-r67bs" (OuterVolumeSpecName: "kube-api-access-r67bs") pod "aa013f01-5701-4d63-bc2c-284f5d4a397f" (UID: "aa013f01-5701-4d63-bc2c-284f5d4a397f"). InnerVolumeSpecName "kube-api-access-r67bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.638579 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" (UID: "bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.673685 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52860224-c188-4eda-830e-9101706f4ce2" (UID: "52860224-c188-4eda-830e-9101706f4ce2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.723998 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.724040 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxbrk\" (UniqueName: \"kubernetes.io/projected/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-kube-api-access-cxbrk\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.724051 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.724060 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r67bs\" (UniqueName: \"kubernetes.io/projected/aa013f01-5701-4d63-bc2c-284f5d4a397f-kube-api-access-r67bs\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.724069 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:27 crc kubenswrapper[4856]: I1122 07:08:27.724077 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52860224-c188-4eda-830e-9101706f4ce2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.184761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gvf9n" event={"ID":"52860224-c188-4eda-830e-9101706f4ce2","Type":"ContainerDied","Data":"49b7575847245235d0c6fc5220d4af7a5a9f23c37325e254afadd39706ed8147"} Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.184813 4856 scope.go:117] "RemoveContainer" containerID="126b9246c8f41cac0b15cc326516fc47f9772f50bd0b584727a432cd18d95673" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.185358 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gvf9n" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.188232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sptws" event={"ID":"bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6","Type":"ContainerDied","Data":"cc0289042839db9f281c6598675e98f1149088a5c6be8db6126170a762b73dbf"} Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.188319 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sptws" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.191328 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dxlt7" event={"ID":"e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598","Type":"ContainerDied","Data":"9b15f426f507ff2aa1249ac29714fa4b546decc4315e525ed1a0a709cb858bf9"} Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.191446 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dxlt7" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.197644 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-75ztc" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.197715 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" event={"ID":"aa013f01-5701-4d63-bc2c-284f5d4a397f","Type":"ContainerDied","Data":"8d5542d7fdf572831ec1ebf9cb55075a48553a24ec8eb3deed1715234573c0cb"} Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.197862 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-x4fc7" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.210095 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kwqfg" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.211139 4856 scope.go:117] "RemoveContainer" containerID="24cb499d4151a464e7b9a32f655039746ac307b3792c7f4df97600e5fd9dc956" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.235553 4856 scope.go:117] "RemoveContainer" containerID="c58ef295300db8ebc25113075638644a0fc6d13316c264dfc4377003b21045af" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.288651 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxlt7"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.291304 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dxlt7"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.308343 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sptws"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.310585 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sptws"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.324189 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gvf9n"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.324736 4856 scope.go:117] "RemoveContainer" containerID="fd30092e9d73f425bfde6cb3e6a4ed66e2d7c19e1f29a09b54991948f942fd61" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.331292 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gvf9n"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.345160 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-75ztc"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.349590 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-75ztc"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.353800 4856 scope.go:117] "RemoveContainer" containerID="88ca96eb16429d3ed58e5f677f7acc92c2f4e8a53fe8d1764e5f92840c5f099a" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.358280 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x4fc7"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.363210 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-x4fc7"] Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.369963 4856 scope.go:117] "RemoveContainer" containerID="afffcbf3a5e5228e89a11cf71a449cb0f2646e4850c10996c3df3ca2885162f3" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.387834 4856 scope.go:117] 
"RemoveContainer" containerID="b1b4571db108f3f19bdae7ddc8d518dd2ac04f0c858d16dec57acf5f3ad7c5f8" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.402318 4856 scope.go:117] "RemoveContainer" containerID="218750b267ae397531b8026dde4ae8a357804ee86c5c9797436acffa003fd52d" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.419364 4856 scope.go:117] "RemoveContainer" containerID="bbc242aebbbedf6dc56a8fb5b277fc15a0223daa1dadd481b2f699cc9c4d17f6" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.435189 4856 scope.go:117] "RemoveContainer" containerID="e8fda0b1d8dbbcd711bba9088ec5870e05fdd03cb7cf57ba13d6a369dbf3c804" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.716232 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" path="/var/lib/kubelet/pods/02bb740d-242c-4846-8bbf-5fe3e4f1b97a/volumes" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.717052 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52860224-c188-4eda-830e-9101706f4ce2" path="/var/lib/kubelet/pods/52860224-c188-4eda-830e-9101706f4ce2/volumes" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.717796 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" path="/var/lib/kubelet/pods/aa013f01-5701-4d63-bc2c-284f5d4a397f/volumes" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.718880 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" path="/var/lib/kubelet/pods/bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6/volumes" Nov 22 07:08:28 crc kubenswrapper[4856]: I1122 07:08:28.719610 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" path="/var/lib/kubelet/pods/e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598/volumes" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.956600 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g4jn9"] Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957055 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957082 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957104 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957118 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957145 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957161 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957188 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957204 4856 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957224 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957238 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957256 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957270 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957285 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957297 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957320 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957334 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957355 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957368 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957384 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957397 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="extract-utilities" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957422 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957435 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="extract-content" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957452 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957465 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: E1122 07:08:29.957482 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 
07:08:29.957495 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957707 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="52860224-c188-4eda-830e-9101706f4ce2" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957728 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb7ea8ac-6ddd-4d0d-9784-95c77d088fc6" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957749 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="02bb740d-242c-4846-8bbf-5fe3e4f1b97a" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957773 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e2c1fa-df0f-4bf0-a8b7-bae9dea24598" containerName="registry-server" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.957789 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa013f01-5701-4d63-bc2c-284f5d4a397f" containerName="marketplace-operator" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.959411 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.963063 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 22 07:08:29 crc kubenswrapper[4856]: I1122 07:08:29.973661 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4jn9"] Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.153978 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-65fqc"] Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.158025 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.158829 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-catalog-content\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.159098 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-utilities\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.159148 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrmk\" (UniqueName: \"kubernetes.io/projected/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-kube-api-access-nfrmk\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.160960 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.172604 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65fqc"] Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-utilities\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261169 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-catalog-content\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261205 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pmcb\" (UniqueName: \"kubernetes.io/projected/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-kube-api-access-7pmcb\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261250 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-utilities\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261344 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrmk\" (UniqueName: 
\"kubernetes.io/projected/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-kube-api-access-nfrmk\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261431 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-catalog-content\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261795 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-utilities\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.261894 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-catalog-content\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.282984 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrmk\" (UniqueName: \"kubernetes.io/projected/1966788b-abc1-4c4a-a29c-aaeba9a3ca65-kube-api-access-nfrmk\") pod \"redhat-operators-g4jn9\" (UID: \"1966788b-abc1-4c4a-a29c-aaeba9a3ca65\") " pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.362522 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-utilities\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.362588 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-catalog-content\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.362615 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pmcb\" (UniqueName: \"kubernetes.io/projected/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-kube-api-access-7pmcb\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.363122 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-utilities\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.363487 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-catalog-content\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.380596 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pmcb\" (UniqueName: \"kubernetes.io/projected/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-kube-api-access-7pmcb\") pod \"community-operators-65fqc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.473470 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.579805 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.689799 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-65fqc"] Nov 22 07:08:30 crc kubenswrapper[4856]: I1122 07:08:30.785678 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g4jn9"] Nov 22 07:08:30 crc kubenswrapper[4856]: W1122 07:08:30.790722 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1966788b_abc1_4c4a_a29c_aaeba9a3ca65.slice/crio-7c794fcd9e559395ef1a8f45060dd2ebd102674b6c185f9ad4afba5c96aa1c98 WatchSource:0}: Error finding container 7c794fcd9e559395ef1a8f45060dd2ebd102674b6c185f9ad4afba5c96aa1c98: Status 404 returned error can't find the container with id 7c794fcd9e559395ef1a8f45060dd2ebd102674b6c185f9ad4afba5c96aa1c98 Nov 22 07:08:31 crc kubenswrapper[4856]: I1122 07:08:31.224153 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jn9" event={"ID":"1966788b-abc1-4c4a-a29c-aaeba9a3ca65","Type":"ContainerStarted","Data":"7c794fcd9e559395ef1a8f45060dd2ebd102674b6c185f9ad4afba5c96aa1c98"} Nov 22 07:08:31 crc kubenswrapper[4856]: I1122 07:08:31.227035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fqc" event={"ID":"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc","Type":"ContainerStarted","Data":"a8cd6d8ed44b7b391a51485bca606151c535d3a1496aa6ad63439acc8e9d8326"} Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.359578 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s8jpj"] Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.363764 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.367985 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8jpj"] Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.368856 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.394382 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b51997-87ba-499c-903d-82c1b85c0968-catalog-content\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.394451 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b51997-87ba-499c-903d-82c1b85c0968-utilities\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.394585 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk6sx\" (UniqueName: \"kubernetes.io/projected/a8b51997-87ba-499c-903d-82c1b85c0968-kube-api-access-rk6sx\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.496536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b51997-87ba-499c-903d-82c1b85c0968-catalog-content\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.496980 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b51997-87ba-499c-903d-82c1b85c0968-utilities\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.497102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk6sx\" (UniqueName: \"kubernetes.io/projected/a8b51997-87ba-499c-903d-82c1b85c0968-kube-api-access-rk6sx\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.497522 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8b51997-87ba-499c-903d-82c1b85c0968-catalog-content\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.501792 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8b51997-87ba-499c-903d-82c1b85c0968-utilities\") pod \"redhat-marketplace-s8jpj\" (UID: 
\"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.522832 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk6sx\" (UniqueName: \"kubernetes.io/projected/a8b51997-87ba-499c-903d-82c1b85c0968-kube-api-access-rk6sx\") pod \"redhat-marketplace-s8jpj\" (UID: \"a8b51997-87ba-499c-903d-82c1b85c0968\") " pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.556179 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqb7t"] Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.559570 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.564266 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.576105 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqb7t"] Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.680737 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.701240 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r444\" (UniqueName: \"kubernetes.io/projected/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-kube-api-access-4r444\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.701325 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-utilities\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.701417 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-catalog-content\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.803380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r444\" (UniqueName: \"kubernetes.io/projected/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-kube-api-access-4r444\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.803456 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-utilities\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.803554 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-catalog-content\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.804673 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-utilities\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.806847 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-catalog-content\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.831630 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r444\" (UniqueName: \"kubernetes.io/projected/3c7b0aba-250c-483e-ba94-3dcc4b9c59bb-kube-api-access-4r444\") pod \"certified-operators-rqb7t\" (UID: \"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb\") " pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.921374 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8jpj"] Nov 22 07:08:32 crc kubenswrapper[4856]: W1122 07:08:32.926844 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8b51997_87ba_499c_903d_82c1b85c0968.slice/crio-966749eff4121f8443ed7a8c766327213c1ac695f9364c6dcd98144b7eb61d14 WatchSource:0}: Error finding container 966749eff4121f8443ed7a8c766327213c1ac695f9364c6dcd98144b7eb61d14: Status 404 returned error can't find the container with id 966749eff4121f8443ed7a8c766327213c1ac695f9364c6dcd98144b7eb61d14 Nov 22 07:08:32 crc kubenswrapper[4856]: I1122 07:08:32.937446 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:08:33 crc kubenswrapper[4856]: I1122 07:08:33.138151 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqb7t"] Nov 22 07:08:33 crc kubenswrapper[4856]: I1122 07:08:33.239812 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqb7t" event={"ID":"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb","Type":"ContainerStarted","Data":"f7faeea4d406112b76f310d2f6c6c2a1a0016ac412ef64bed63a78bb5bb32ae1"} Nov 22 07:08:33 crc kubenswrapper[4856]: I1122 07:08:33.241606 4856 generic.go:334] "Generic (PLEG): container finished" podID="1966788b-abc1-4c4a-a29c-aaeba9a3ca65" containerID="26fc00dd36245d94921dd484deb4ef3510f5e8301d39c33c7a4e0814c399bf52" exitCode=0 Nov 22 07:08:33 crc kubenswrapper[4856]: I1122 07:08:33.242060 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jn9" event={"ID":"1966788b-abc1-4c4a-a29c-aaeba9a3ca65","Type":"ContainerDied","Data":"26fc00dd36245d94921dd484deb4ef3510f5e8301d39c33c7a4e0814c399bf52"} Nov 22 07:08:33 crc kubenswrapper[4856]: I1122 07:08:33.243911 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8jpj" event={"ID":"a8b51997-87ba-499c-903d-82c1b85c0968","Type":"ContainerStarted","Data":"966749eff4121f8443ed7a8c766327213c1ac695f9364c6dcd98144b7eb61d14"} Nov 22 07:08:33 crc kubenswrapper[4856]: I1122 07:08:33.247165 4856 generic.go:334] "Generic (PLEG): container finished" podID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerID="bb21a190082d552f6fce36d6bc15c016cd0e681baf92e80b7487bf04d456b816" exitCode=0 Nov 22 07:08:33 crc kubenswrapper[4856]: I1122 07:08:33.247224 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fqc" event={"ID":"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc","Type":"ContainerDied","Data":"bb21a190082d552f6fce36d6bc15c016cd0e681baf92e80b7487bf04d456b816"} Nov 22 07:08:34 crc kubenswrapper[4856]: I1122 07:08:34.257135 4856 generic.go:334] "Generic (PLEG): container finished" podID="3c7b0aba-250c-483e-ba94-3dcc4b9c59bb" containerID="2630b4370006b6f412ac362e2553b44269c4e4ed56945e36d3adba2d01a7d867" exitCode=0 Nov 22 07:08:34 crc kubenswrapper[4856]: I1122 07:08:34.257390 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqb7t" event={"ID":"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb","Type":"ContainerDied","Data":"2630b4370006b6f412ac362e2553b44269c4e4ed56945e36d3adba2d01a7d867"} Nov 22 07:08:34 crc kubenswrapper[4856]: I1122 07:08:34.260391 4856 generic.go:334] "Generic (PLEG): container finished" podID="a8b51997-87ba-499c-903d-82c1b85c0968" containerID="daed2cd78b11a4e585397a2ae63eeb45ca487e6504d807e302d6e70b78e9056c" exitCode=0 Nov 22 07:08:34 crc kubenswrapper[4856]: I1122 07:08:34.260495 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8jpj" event={"ID":"a8b51997-87ba-499c-903d-82c1b85c0968","Type":"ContainerDied","Data":"daed2cd78b11a4e585397a2ae63eeb45ca487e6504d807e302d6e70b78e9056c"} Nov 22 07:08:46 crc kubenswrapper[4856]: I1122 07:08:46.348541 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqb7t" event={"ID":"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb","Type":"ContainerStarted","Data":"ea7d95108f9421ee478f8c99afbd0f10698cdd709458814e01afc1d20304c8ce"} Nov 22 07:08:46 crc 
kubenswrapper[4856]: I1122 07:08:46.350497 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jn9" event={"ID":"1966788b-abc1-4c4a-a29c-aaeba9a3ca65","Type":"ContainerStarted","Data":"17b99c3303fc333666968d5e51f49c674fd6ea33e336d35798f195b864685601"} Nov 22 07:08:46 crc kubenswrapper[4856]: I1122 07:08:46.352294 4856 generic.go:334] "Generic (PLEG): container finished" podID="a8b51997-87ba-499c-903d-82c1b85c0968" containerID="4d9442516e5ca742f98ea58792f56ab8728de848151db35fa8c268a63854fc90" exitCode=0 Nov 22 07:08:46 crc kubenswrapper[4856]: I1122 07:08:46.352360 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8jpj" event={"ID":"a8b51997-87ba-499c-903d-82c1b85c0968","Type":"ContainerDied","Data":"4d9442516e5ca742f98ea58792f56ab8728de848151db35fa8c268a63854fc90"} Nov 22 07:08:46 crc kubenswrapper[4856]: I1122 07:08:46.354350 4856 generic.go:334] "Generic (PLEG): container finished" podID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerID="c4b81cc20c736dfaa8755bdd0d37409fb5de3c16a16f5d73269c379208a166d2" exitCode=0 Nov 22 07:08:46 crc kubenswrapper[4856]: I1122 07:08:46.354400 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fqc" event={"ID":"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc","Type":"ContainerDied","Data":"c4b81cc20c736dfaa8755bdd0d37409fb5de3c16a16f5d73269c379208a166d2"} Nov 22 07:08:47 crc kubenswrapper[4856]: I1122 07:08:47.361668 4856 generic.go:334] "Generic (PLEG): container finished" podID="1966788b-abc1-4c4a-a29c-aaeba9a3ca65" containerID="17b99c3303fc333666968d5e51f49c674fd6ea33e336d35798f195b864685601" exitCode=0 Nov 22 07:08:47 crc kubenswrapper[4856]: I1122 07:08:47.361750 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jn9" event={"ID":"1966788b-abc1-4c4a-a29c-aaeba9a3ca65","Type":"ContainerDied","Data":"17b99c3303fc333666968d5e51f49c674fd6ea33e336d35798f195b864685601"} Nov 22 07:08:47 crc kubenswrapper[4856]: I1122 07:08:47.364163 4856 generic.go:334] "Generic (PLEG): container finished" podID="3c7b0aba-250c-483e-ba94-3dcc4b9c59bb" containerID="ea7d95108f9421ee478f8c99afbd0f10698cdd709458814e01afc1d20304c8ce" exitCode=0 Nov 22 07:08:47 crc kubenswrapper[4856]: I1122 07:08:47.364228 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqb7t" event={"ID":"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb","Type":"ContainerDied","Data":"ea7d95108f9421ee478f8c99afbd0f10698cdd709458814e01afc1d20304c8ce"} Nov 22 07:08:50 crc kubenswrapper[4856]: I1122 07:08:50.380699 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g4jn9" event={"ID":"1966788b-abc1-4c4a-a29c-aaeba9a3ca65","Type":"ContainerStarted","Data":"92d882839f69678becca2ea3734884c2d2ccd564bcb6cb2b8ce6a9e1f5272d18"} Nov 22 07:08:50 crc kubenswrapper[4856]: I1122 07:08:50.400172 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g4jn9" podStartSLOduration=5.317290201 podStartE2EDuration="21.400152009s" podCreationTimestamp="2025-11-22 07:08:29 +0000 UTC" firstStartedPulling="2025-11-22 07:08:33.243466661 +0000 UTC m=+355.656859919" lastFinishedPulling="2025-11-22 07:08:49.326328469 +0000 UTC m=+371.739721727" observedRunningTime="2025-11-22 07:08:50.396687306 +0000 UTC m=+372.810080564" watchObservedRunningTime="2025-11-22 07:08:50.400152009 +0000 UTC m=+372.813545267" 
Nov 22 07:08:50 crc kubenswrapper[4856]: I1122 07:08:50.581089 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:50 crc kubenswrapper[4856]: I1122 07:08:50.581160 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:08:51 crc kubenswrapper[4856]: I1122 07:08:51.621784 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g4jn9" podUID="1966788b-abc1-4c4a-a29c-aaeba9a3ca65" containerName="registry-server" probeResult="failure" output=< Nov 22 07:08:51 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:08:51 crc kubenswrapper[4856]: > Nov 22 07:08:55 crc kubenswrapper[4856]: I1122 07:08:55.422586 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8jpj" event={"ID":"a8b51997-87ba-499c-903d-82c1b85c0968","Type":"ContainerStarted","Data":"b31d6e2a7dcfde8e11aefab6704964eab977f2a02f46b5b78e2aac5c70b84982"} Nov 22 07:08:57 crc kubenswrapper[4856]: I1122 07:08:57.454327 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s8jpj" podStartSLOduration=10.204212763 podStartE2EDuration="25.454308568s" podCreationTimestamp="2025-11-22 07:08:32 +0000 UTC" firstStartedPulling="2025-11-22 07:08:35.283335644 +0000 UTC m=+357.696728942" lastFinishedPulling="2025-11-22 07:08:50.533431489 +0000 UTC m=+372.946824747" observedRunningTime="2025-11-22 07:08:57.450502425 +0000 UTC m=+379.863895693" watchObservedRunningTime="2025-11-22 07:08:57.454308568 +0000 UTC m=+379.867701816" Nov 22 07:08:59 crc kubenswrapper[4856]: I1122 07:08:59.754389 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:08:59 crc kubenswrapper[4856]: I1122 07:08:59.754491 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:09:00 crc kubenswrapper[4856]: I1122 07:09:00.620668 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:09:00 crc kubenswrapper[4856]: I1122 07:09:00.656939 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g4jn9" Nov 22 07:09:02 crc kubenswrapper[4856]: I1122 07:09:02.681115 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:09:02 crc kubenswrapper[4856]: I1122 07:09:02.681174 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:09:02 crc kubenswrapper[4856]: I1122 07:09:02.719052 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:09:03 crc kubenswrapper[4856]: I1122 07:09:03.507502 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s8jpj" Nov 22 07:09:10 crc kubenswrapper[4856]: I1122 07:09:10.509402 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fqc" event={"ID":"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc","Type":"ContainerStarted","Data":"7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05"} Nov 22 07:09:10 crc kubenswrapper[4856]: I1122 07:09:10.511488 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqb7t" event={"ID":"3c7b0aba-250c-483e-ba94-3dcc4b9c59bb","Type":"ContainerStarted","Data":"41a1705c808adb273142a7eee98b500ab1ada240b7f2e933d2dca39c3f05c48e"} Nov 22 07:09:12 crc kubenswrapper[4856]: I1122 07:09:12.549368 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-65fqc" podStartSLOduration=9.204297183 podStartE2EDuration="42.549345964s" podCreationTimestamp="2025-11-22 07:08:30 +0000 UTC" firstStartedPulling="2025-11-22 07:08:33.249292467 +0000 UTC m=+355.662685725" lastFinishedPulling="2025-11-22 07:09:06.594341238 +0000 UTC m=+389.007734506" observedRunningTime="2025-11-22 07:09:12.544345912 +0000 UTC m=+394.957739170" watchObservedRunningTime="2025-11-22 07:09:12.549345964 +0000 UTC m=+394.962739222" Nov 22 07:09:12 crc kubenswrapper[4856]: I1122 07:09:12.573144 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqb7t" podStartSLOduration=9.482125084 podStartE2EDuration="40.573115311s" podCreationTimestamp="2025-11-22 07:08:32 +0000 UTC" firstStartedPulling="2025-11-22 07:08:35.283338954 +0000 UTC m=+357.696732222" lastFinishedPulling="2025-11-22 07:09:06.374329191 +0000 UTC m=+388.787722449" observedRunningTime="2025-11-22 07:09:12.567922814 +0000 UTC m=+394.981316072" watchObservedRunningTime="2025-11-22 07:09:12.573115311 +0000 UTC m=+394.986508589" Nov 22 07:09:12 crc kubenswrapper[4856]: I1122 07:09:12.938382 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:09:12 crc kubenswrapper[4856]: I1122 07:09:12.938565 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:09:12 crc kubenswrapper[4856]: I1122 07:09:12.999455 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:09:14 crc kubenswrapper[4856]: I1122 07:09:14.600260 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqb7t" Nov 22 07:09:20 crc kubenswrapper[4856]: I1122 07:09:20.474536 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:09:20 crc kubenswrapper[4856]: I1122 07:09:20.475500 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:09:20 crc kubenswrapper[4856]: I1122 07:09:20.541023 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:09:20 crc kubenswrapper[4856]: I1122 07:09:20.641911 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:09:29 crc 
kubenswrapper[4856]: I1122 07:09:29.754406 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:09:29 crc kubenswrapper[4856]: I1122 07:09:29.755380 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.716620 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l4dp7"] Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.718627 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.736730 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l4dp7"] Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781200 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-registry-tls\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781272 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv2z2\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-kube-api-access-cv2z2\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781442 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e69a50b8-c236-40e4-a1a0-5234e233ce1a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781549 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-bound-sa-token\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781580 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e69a50b8-c236-40e4-a1a0-5234e233ce1a-registry-certificates\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781608 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e69a50b8-c236-40e4-a1a0-5234e233ce1a-trusted-ca\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.781636 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e69a50b8-c236-40e4-a1a0-5234e233ce1a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.808095 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.882762 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-bound-sa-token\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.882882 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e69a50b8-c236-40e4-a1a0-5234e233ce1a-registry-certificates\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.884544 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e69a50b8-c236-40e4-a1a0-5234e233ce1a-trusted-ca\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.884582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e69a50b8-c236-40e4-a1a0-5234e233ce1a-registry-certificates\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.882911 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e69a50b8-c236-40e4-a1a0-5234e233ce1a-trusted-ca\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.884664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e69a50b8-c236-40e4-a1a0-5234e233ce1a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.885739 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-registry-tls\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.885786 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv2z2\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-kube-api-access-cv2z2\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.885887 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e69a50b8-c236-40e4-a1a0-5234e233ce1a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.886265 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e69a50b8-c236-40e4-a1a0-5234e233ce1a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.892419 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e69a50b8-c236-40e4-a1a0-5234e233ce1a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.893168 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-registry-tls\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 crc kubenswrapper[4856]: I1122 07:09:31.905940 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-bound-sa-token\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:31 
crc kubenswrapper[4856]: I1122 07:09:31.908747 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv2z2\" (UniqueName: \"kubernetes.io/projected/e69a50b8-c236-40e4-a1a0-5234e233ce1a-kube-api-access-cv2z2\") pod \"image-registry-66df7c8f76-l4dp7\" (UID: \"e69a50b8-c236-40e4-a1a0-5234e233ce1a\") " pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:32 crc kubenswrapper[4856]: I1122 07:09:32.037362 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:32 crc kubenswrapper[4856]: I1122 07:09:32.459839 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-l4dp7"] Nov 22 07:09:32 crc kubenswrapper[4856]: I1122 07:09:32.666088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" event={"ID":"e69a50b8-c236-40e4-a1a0-5234e233ce1a","Type":"ContainerStarted","Data":"5e1f7e1db582bdfa84599690d413306c088b4fa6785e2117ad8abd06e43e16d2"} Nov 22 07:09:34 crc kubenswrapper[4856]: I1122 07:09:34.683720 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" event={"ID":"e69a50b8-c236-40e4-a1a0-5234e233ce1a","Type":"ContainerStarted","Data":"0945b06e71dea883f73285f641404aa131690b31e3c91fbe42d0fe7c889cec10"} Nov 22 07:09:34 crc kubenswrapper[4856]: I1122 07:09:34.684824 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:34 crc kubenswrapper[4856]: I1122 07:09:34.705347 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" podStartSLOduration=3.705319562 podStartE2EDuration="3.705319562s" podCreationTimestamp="2025-11-22 07:09:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:09:34.701742007 +0000 UTC m=+417.115135295" watchObservedRunningTime="2025-11-22 07:09:34.705319562 +0000 UTC m=+417.118712830" Nov 22 07:09:52 crc kubenswrapper[4856]: I1122 07:09:52.043065 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-l4dp7" Nov 22 07:09:52 crc kubenswrapper[4856]: I1122 07:09:52.089719 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sl25x"] Nov 22 07:09:59 crc kubenswrapper[4856]: I1122 07:09:59.754293 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:09:59 crc kubenswrapper[4856]: I1122 07:09:59.754718 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:09:59 crc kubenswrapper[4856]: I1122 07:09:59.754780 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:09:59 crc kubenswrapper[4856]: I1122 07:09:59.755476 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"446652a4e7a6c1452a08d9219d8000e01189f33aeb22bd2b2862fae72dd9e328"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:09:59 crc kubenswrapper[4856]: I1122 07:09:59.755549 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://446652a4e7a6c1452a08d9219d8000e01189f33aeb22bd2b2862fae72dd9e328" gracePeriod=600 Nov 22 07:10:00 crc kubenswrapper[4856]: I1122 07:10:00.872487 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="446652a4e7a6c1452a08d9219d8000e01189f33aeb22bd2b2862fae72dd9e328" exitCode=0 Nov 22 07:10:00 crc kubenswrapper[4856]: I1122 07:10:00.872584 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"446652a4e7a6c1452a08d9219d8000e01189f33aeb22bd2b2862fae72dd9e328"} Nov 22 07:10:00 crc kubenswrapper[4856]: I1122 07:10:00.872883 4856 scope.go:117] "RemoveContainer" containerID="91742887aaabc84135dc9a74c02571b48cd7e8000891c4eec82878447be6018b" Nov 22 07:10:03 crc kubenswrapper[4856]: I1122 07:10:03.168125 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"97adb7720511ab281b9b6ad25fd800510b058455c6ccc3c71322ef809023ee98"} Nov 22 07:10:17 crc kubenswrapper[4856]: I1122 07:10:17.129070 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" podUID="7faca66b-795d-46b2-aebd-53f45fdb51de" containerName="registry" containerID="cri-o://99214763ec63c9422df1762c62fa78bfb914b9ff912460aa8392b9ba16ed287c" gracePeriod=30 Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.286267 4856 generic.go:334] "Generic (PLEG): container finished" podID="7faca66b-795d-46b2-aebd-53f45fdb51de" containerID="99214763ec63c9422df1762c62fa78bfb914b9ff912460aa8392b9ba16ed287c" exitCode=0 Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.286394 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" event={"ID":"7faca66b-795d-46b2-aebd-53f45fdb51de","Type":"ContainerDied","Data":"99214763ec63c9422df1762c62fa78bfb914b9ff912460aa8392b9ba16ed287c"} Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.375297 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.512430 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-tls\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.512783 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-bound-sa-token\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.513363 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.513615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7faca66b-795d-46b2-aebd-53f45fdb51de-ca-trust-extracted\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.513753 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-certificates\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.513861 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljg6r\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-kube-api-access-ljg6r\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.513975 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-trusted-ca\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.514051 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7faca66b-795d-46b2-aebd-53f45fdb51de-installation-pull-secrets\") pod \"7faca66b-795d-46b2-aebd-53f45fdb51de\" (UID: \"7faca66b-795d-46b2-aebd-53f45fdb51de\") " Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.516133 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.516847 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.521380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.522684 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.528592 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-kube-api-access-ljg6r" (OuterVolumeSpecName: "kube-api-access-ljg6r") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "kube-api-access-ljg6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.529221 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7faca66b-795d-46b2-aebd-53f45fdb51de-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.537298 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.539221 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7faca66b-795d-46b2-aebd-53f45fdb51de-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7faca66b-795d-46b2-aebd-53f45fdb51de" (UID: "7faca66b-795d-46b2-aebd-53f45fdb51de"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.615977 4856 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7faca66b-795d-46b2-aebd-53f45fdb51de-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.616028 4856 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.616045 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljg6r\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-kube-api-access-ljg6r\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.616060 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7faca66b-795d-46b2-aebd-53f45fdb51de-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.616071 4856 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7faca66b-795d-46b2-aebd-53f45fdb51de-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.616083 4856 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:18 crc kubenswrapper[4856]: I1122 07:10:18.616093 4856 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7faca66b-795d-46b2-aebd-53f45fdb51de-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 22 07:10:19 crc kubenswrapper[4856]: I1122 07:10:19.300543 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" event={"ID":"7faca66b-795d-46b2-aebd-53f45fdb51de","Type":"ContainerDied","Data":"8793c558353f849fe37475b70d9e032a026eea3481c69141adc0619745beffdd"} Nov 22 07:10:19 crc kubenswrapper[4856]: I1122 07:10:19.300629 4856 scope.go:117] "RemoveContainer" containerID="99214763ec63c9422df1762c62fa78bfb914b9ff912460aa8392b9ba16ed287c" Nov 22 07:10:19 crc kubenswrapper[4856]: I1122 07:10:19.300688 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" Nov 22 07:10:19 crc kubenswrapper[4856]: I1122 07:10:19.323435 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sl25x"] Nov 22 07:10:19 crc kubenswrapper[4856]: I1122 07:10:19.330786 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sl25x"] Nov 22 07:10:20 crc kubenswrapper[4856]: I1122 07:10:20.718194 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7faca66b-795d-46b2-aebd-53f45fdb51de" path="/var/lib/kubelet/pods/7faca66b-795d-46b2-aebd-53f45fdb51de/volumes" Nov 22 07:10:23 crc kubenswrapper[4856]: I1122 07:10:23.195461 4856 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-sl25x container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.27:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 07:10:23 crc kubenswrapper[4856]: I1122 07:10:23.196072 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-sl25x" podUID="7faca66b-795d-46b2-aebd-53f45fdb51de" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.27:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:11:44 crc kubenswrapper[4856]: I1122 07:11:44.292468 4856 scope.go:117] "RemoveContainer" containerID="7f0755ec0247e397e9891c13f4eaa9cbc7d9ccae6441700d9a817b3cff437506" Nov 22 07:12:29 crc kubenswrapper[4856]: I1122 07:12:29.755132 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:12:29 crc kubenswrapper[4856]: I1122 07:12:29.756256 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:12:59 crc kubenswrapper[4856]: I1122 07:12:59.754872 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:12:59 crc kubenswrapper[4856]: I1122 07:12:59.755289 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:13:29 crc kubenswrapper[4856]: I1122 07:13:29.754630 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Nov 22 07:13:29 crc kubenswrapper[4856]: I1122 07:13:29.756380 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:13:29 crc kubenswrapper[4856]: I1122 07:13:29.756533 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:13:30 crc kubenswrapper[4856]: I1122 07:13:30.508734 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"97adb7720511ab281b9b6ad25fd800510b058455c6ccc3c71322ef809023ee98"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:13:30 crc kubenswrapper[4856]: I1122 07:13:30.509323 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://97adb7720511ab281b9b6ad25fd800510b058455c6ccc3c71322ef809023ee98" gracePeriod=600 Nov 22 07:13:31 crc kubenswrapper[4856]: I1122 07:13:31.515459 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="97adb7720511ab281b9b6ad25fd800510b058455c6ccc3c71322ef809023ee98" exitCode=0 Nov 22 07:13:31 crc kubenswrapper[4856]: I1122 07:13:31.515519 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"97adb7720511ab281b9b6ad25fd800510b058455c6ccc3c71322ef809023ee98"} Nov 22 07:13:31 crc kubenswrapper[4856]: I1122 07:13:31.515822 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"704ded6d89f91ae94e03498e78b0126d0b80a3e0d0c6bf737cb1be33e4a00015"} Nov 22 07:13:31 crc kubenswrapper[4856]: I1122 07:13:31.515872 4856 scope.go:117] "RemoveContainer" containerID="446652a4e7a6c1452a08d9219d8000e01189f33aeb22bd2b2862fae72dd9e328" Nov 22 07:13:44 crc kubenswrapper[4856]: I1122 07:13:44.340832 4856 scope.go:117] "RemoveContainer" containerID="8207b09b5a884d3a53ba25a5bdb4275045b617f883d2be3f134ab1631731eff5" Nov 22 07:13:44 crc kubenswrapper[4856]: I1122 07:13:44.366991 4856 scope.go:117] "RemoveContainer" containerID="22446d1b76478eca78a08dca44b3932bb1fb7a776fc514a49366a626cf06ad5d" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.143137 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77"] Nov 22 07:15:00 crc kubenswrapper[4856]: E1122 07:15:00.144046 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7faca66b-795d-46b2-aebd-53f45fdb51de" containerName="registry" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.144063 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7faca66b-795d-46b2-aebd-53f45fdb51de" containerName="registry" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.144172 4856 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7faca66b-795d-46b2-aebd-53f45fdb51de" containerName="registry" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.144624 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.147802 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.147802 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.158342 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77"] Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.248391 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-secret-volume\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.248473 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-config-volume\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.248502 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72rbf\" (UniqueName: \"kubernetes.io/projected/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-kube-api-access-72rbf\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.350399 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72rbf\" (UniqueName: \"kubernetes.io/projected/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-kube-api-access-72rbf\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.350611 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-secret-volume\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.351741 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-config-volume\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 
07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.352873 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-config-volume\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.365838 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-secret-volume\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.373052 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72rbf\" (UniqueName: \"kubernetes.io/projected/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-kube-api-access-72rbf\") pod \"collect-profiles-29396595-bcg77\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.472654 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:00 crc kubenswrapper[4856]: I1122 07:15:00.688608 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77"] Nov 22 07:15:01 crc kubenswrapper[4856]: I1122 07:15:01.023281 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" event={"ID":"424fa6a2-20eb-46ab-b5df-b67dc5dd211a","Type":"ContainerStarted","Data":"eeb10c18c8617d6927334ecdb218b1b6b37fa85d9142a6c7d9870ec3aecb8ca4"} Nov 22 07:15:02 crc kubenswrapper[4856]: I1122 07:15:02.039563 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" event={"ID":"424fa6a2-20eb-46ab-b5df-b67dc5dd211a","Type":"ContainerStarted","Data":"dec5788054cab634606129d0e0d30843dc7cc305e4d705f334185cd54a09a44d"} Nov 22 07:15:02 crc kubenswrapper[4856]: I1122 07:15:02.054541 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" podStartSLOduration=2.054527977 podStartE2EDuration="2.054527977s" podCreationTimestamp="2025-11-22 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:15:02.054306021 +0000 UTC m=+744.467699299" watchObservedRunningTime="2025-11-22 07:15:02.054527977 +0000 UTC m=+744.467921235" Nov 22 07:15:03 crc kubenswrapper[4856]: I1122 07:15:03.047874 4856 generic.go:334] "Generic (PLEG): container finished" podID="424fa6a2-20eb-46ab-b5df-b67dc5dd211a" containerID="dec5788054cab634606129d0e0d30843dc7cc305e4d705f334185cd54a09a44d" exitCode=0 Nov 22 07:15:03 crc kubenswrapper[4856]: I1122 07:15:03.047957 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" event={"ID":"424fa6a2-20eb-46ab-b5df-b67dc5dd211a","Type":"ContainerDied","Data":"dec5788054cab634606129d0e0d30843dc7cc305e4d705f334185cd54a09a44d"} Nov 22 07:15:04 
crc kubenswrapper[4856]: I1122 07:15:04.263046 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.297357 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-config-volume\") pod \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.297444 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72rbf\" (UniqueName: \"kubernetes.io/projected/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-kube-api-access-72rbf\") pod \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.297498 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-secret-volume\") pod \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\" (UID: \"424fa6a2-20eb-46ab-b5df-b67dc5dd211a\") " Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.298151 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-config-volume" (OuterVolumeSpecName: "config-volume") pod "424fa6a2-20eb-46ab-b5df-b67dc5dd211a" (UID: "424fa6a2-20eb-46ab-b5df-b67dc5dd211a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.304385 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-kube-api-access-72rbf" (OuterVolumeSpecName: "kube-api-access-72rbf") pod "424fa6a2-20eb-46ab-b5df-b67dc5dd211a" (UID: "424fa6a2-20eb-46ab-b5df-b67dc5dd211a"). InnerVolumeSpecName "kube-api-access-72rbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.304930 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "424fa6a2-20eb-46ab-b5df-b67dc5dd211a" (UID: "424fa6a2-20eb-46ab-b5df-b67dc5dd211a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.398452 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72rbf\" (UniqueName: \"kubernetes.io/projected/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-kube-api-access-72rbf\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.398499 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:04 crc kubenswrapper[4856]: I1122 07:15:04.398528 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424fa6a2-20eb-46ab-b5df-b67dc5dd211a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:05 crc kubenswrapper[4856]: I1122 07:15:05.058713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" event={"ID":"424fa6a2-20eb-46ab-b5df-b67dc5dd211a","Type":"ContainerDied","Data":"eeb10c18c8617d6927334ecdb218b1b6b37fa85d9142a6c7d9870ec3aecb8ca4"} Nov 22 07:15:05 crc kubenswrapper[4856]: I1122 07:15:05.058784 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb10c18c8617d6927334ecdb218b1b6b37fa85d9142a6c7d9870ec3aecb8ca4" Nov 22 07:15:05 crc kubenswrapper[4856]: I1122 07:15:05.058803 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77" Nov 22 07:15:18 crc kubenswrapper[4856]: I1122 07:15:18.715775 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-csttt"] Nov 22 07:15:18 crc kubenswrapper[4856]: I1122 07:15:18.716590 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" podUID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" containerName="controller-manager" containerID="cri-o://e79fb505f3364a9b790a6cc07e9dc76dbce00678dc37abb4c4d0b7356e4ba1a5" gracePeriod=30 Nov 22 07:15:18 crc kubenswrapper[4856]: I1122 07:15:18.814327 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42"] Nov 22 07:15:18 crc kubenswrapper[4856]: I1122 07:15:18.814585 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" containerID="cri-o://1d0fdfbd8ce6eae6514635e829c0064a508db366427cec3a76b24ec5a0145256" gracePeriod=30 Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.133281 4856 generic.go:334] "Generic (PLEG): container finished" podID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" containerID="e79fb505f3364a9b790a6cc07e9dc76dbce00678dc37abb4c4d0b7356e4ba1a5" exitCode=0 Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.133348 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" event={"ID":"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f","Type":"ContainerDied","Data":"e79fb505f3364a9b790a6cc07e9dc76dbce00678dc37abb4c4d0b7356e4ba1a5"} Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.136461 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerID="1d0fdfbd8ce6eae6514635e829c0064a508db366427cec3a76b24ec5a0145256" exitCode=0 Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.136549 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" event={"ID":"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0","Type":"ContainerDied","Data":"1d0fdfbd8ce6eae6514635e829c0064a508db366427cec3a76b24ec5a0145256"} Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.560969 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.598064 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert\") pod \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.598127 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdw6d\" (UniqueName: \"kubernetes.io/projected/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-kube-api-access-bdw6d\") pod \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.598170 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config\") pod \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.598241 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca\") pod \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.598259 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles\") pod \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\" (UID: \"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.599383 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.599901 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config" (OuterVolumeSpecName: "config") pod "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.600166 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca" (OuterVolumeSpecName: "client-ca") pod "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.607294 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-kube-api-access-bdw6d" (OuterVolumeSpecName: "kube-api-access-bdw6d") pod "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f"). InnerVolumeSpecName "kube-api-access-bdw6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.608168 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" (UID: "bbedaf28-a7ca-437c-93a8-8c676c7a9f1f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.640842 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.699764 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-serving-cert\") pod \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.699856 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-client-ca\") pod \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.699920 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb4vj\" (UniqueName: \"kubernetes.io/projected/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-kube-api-access-qb4vj\") pod \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.700036 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-config\") pod \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\" (UID: \"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0\") " Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.700232 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.700252 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdw6d\" (UniqueName: \"kubernetes.io/projected/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-kube-api-access-bdw6d\") on node \"crc\" DevicePath \"\"" Nov 22 
07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.700262 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.700270 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.700278 4856 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.700984 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-client-ca" (OuterVolumeSpecName: "client-ca") pod "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" (UID: "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.701009 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-config" (OuterVolumeSpecName: "config") pod "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" (UID: "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.704048 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-kube-api-access-qb4vj" (OuterVolumeSpecName: "kube-api-access-qb4vj") pod "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" (UID: "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0"). InnerVolumeSpecName "kube-api-access-qb4vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.706812 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" (UID: "a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.801398 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.801436 4856 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.801448 4856 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-client-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:19 crc kubenswrapper[4856]: I1122 07:15:19.801462 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb4vj\" (UniqueName: \"kubernetes.io/projected/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0-kube-api-access-qb4vj\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.143738 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" event={"ID":"bbedaf28-a7ca-437c-93a8-8c676c7a9f1f","Type":"ContainerDied","Data":"aa78fdd83656dbc58edbc662698d19a1d3a0d65ebe62ff659a202ea5ba40d529"} Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.143820 4856 scope.go:117] "RemoveContainer" containerID="e79fb505f3364a9b790a6cc07e9dc76dbce00678dc37abb4c4d0b7356e4ba1a5" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.144030 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-csttt" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.152401 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" event={"ID":"a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0","Type":"ContainerDied","Data":"5b9584540ec95b7ea31e3fd3f0e05d164f1ee4603a3edd2fd69b08a71a0b78ae"} Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.152583 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.172220 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-csttt"] Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.172797 4856 scope.go:117] "RemoveContainer" containerID="1d0fdfbd8ce6eae6514635e829c0064a508db366427cec3a76b24ec5a0145256" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.176063 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-csttt"] Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.187244 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42"] Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.190469 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-s6j42"] Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.717081 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" path="/var/lib/kubelet/pods/a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0/volumes" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.717651 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" path="/var/lib/kubelet/pods/bbedaf28-a7ca-437c-93a8-8c676c7a9f1f/volumes" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.795899 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm"] Nov 22 07:15:20 crc kubenswrapper[4856]: E1122 07:15:20.796446 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.796475 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" Nov 22 07:15:20 crc kubenswrapper[4856]: E1122 07:15:20.796502 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" containerName="controller-manager" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.796529 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" containerName="controller-manager" Nov 22 07:15:20 crc kubenswrapper[4856]: E1122 07:15:20.796549 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424fa6a2-20eb-46ab-b5df-b67dc5dd211a" containerName="collect-profiles" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.796558 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="424fa6a2-20eb-46ab-b5df-b67dc5dd211a" containerName="collect-profiles" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.796761 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1fd85de-6ea1-4e8c-a6d9-e6431fb5bec0" containerName="route-controller-manager" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.796785 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="424fa6a2-20eb-46ab-b5df-b67dc5dd211a" containerName="collect-profiles" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.796804 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bbedaf28-a7ca-437c-93a8-8c676c7a9f1f" containerName="controller-manager" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.797452 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.800785 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7dbccbc674-pdg58"] Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.802851 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.804314 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.816313 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.816623 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.817299 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca1343-914b-414b-b1e3-5e4b2a165697-config\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.817459 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxntc\" (UniqueName: \"kubernetes.io/projected/76ca1343-914b-414b-b1e3-5e4b2a165697-kube-api-access-nxntc\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.817527 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ca1343-914b-414b-b1e3-5e4b2a165697-client-ca\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.817578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca1343-914b-414b-b1e3-5e4b2a165697-serving-cert\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.819028 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.819341 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.819711 4856 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.819898 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.820197 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.820349 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.820895 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.821037 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.821174 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.824933 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm"] Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.833562 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.840630 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dbccbc674-pdg58"] Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918653 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-proxy-ca-bundles\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918726 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ca1343-914b-414b-b1e3-5e4b2a165697-client-ca\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918763 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-config\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918785 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5811da44-659c-4848-9b68-ad12eba34f47-serving-cert\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 
07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918806 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmlxp\" (UniqueName: \"kubernetes.io/projected/5811da44-659c-4848-9b68-ad12eba34f47-kube-api-access-wmlxp\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918840 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca1343-914b-414b-b1e3-5e4b2a165697-serving-cert\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918884 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca1343-914b-414b-b1e3-5e4b2a165697-config\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918920 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-client-ca\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.918952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxntc\" (UniqueName: \"kubernetes.io/projected/76ca1343-914b-414b-b1e3-5e4b2a165697-kube-api-access-nxntc\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.919921 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76ca1343-914b-414b-b1e3-5e4b2a165697-client-ca\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.920894 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ca1343-914b-414b-b1e3-5e4b2a165697-config\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.922911 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76ca1343-914b-414b-b1e3-5e4b2a165697-serving-cert\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:20 crc kubenswrapper[4856]: I1122 07:15:20.948664 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nxntc\" (UniqueName: \"kubernetes.io/projected/76ca1343-914b-414b-b1e3-5e4b2a165697-kube-api-access-nxntc\") pod \"route-controller-manager-699987fd4b-ggfvm\" (UID: \"76ca1343-914b-414b-b1e3-5e4b2a165697\") " pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.020334 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-client-ca\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.020403 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-proxy-ca-bundles\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.020440 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-config\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.020457 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5811da44-659c-4848-9b68-ad12eba34f47-serving-cert\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.020475 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmlxp\" (UniqueName: \"kubernetes.io/projected/5811da44-659c-4848-9b68-ad12eba34f47-kube-api-access-wmlxp\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.021482 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-client-ca\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.022745 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-proxy-ca-bundles\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.024929 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5811da44-659c-4848-9b68-ad12eba34f47-serving-cert\") pod \"controller-manager-7dbccbc674-pdg58\" 
(UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.025027 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5811da44-659c-4848-9b68-ad12eba34f47-config\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.044075 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmlxp\" (UniqueName: \"kubernetes.io/projected/5811da44-659c-4848-9b68-ad12eba34f47-kube-api-access-wmlxp\") pod \"controller-manager-7dbccbc674-pdg58\" (UID: \"5811da44-659c-4848-9b68-ad12eba34f47\") " pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.129953 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.145332 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.330208 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm"] Nov 22 07:15:21 crc kubenswrapper[4856]: I1122 07:15:21.371272 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7dbccbc674-pdg58"] Nov 22 07:15:21 crc kubenswrapper[4856]: W1122 07:15:21.376480 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5811da44_659c_4848_9b68_ad12eba34f47.slice/crio-3c0c31d2dfc3e01db41502a22c58f28beca2f8e757967309914ed96743702ea0 WatchSource:0}: Error finding container 3c0c31d2dfc3e01db41502a22c58f28beca2f8e757967309914ed96743702ea0: Status 404 returned error can't find the container with id 3c0c31d2dfc3e01db41502a22c58f28beca2f8e757967309914ed96743702ea0 Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.171046 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" event={"ID":"5811da44-659c-4848-9b68-ad12eba34f47","Type":"ContainerStarted","Data":"877280af0a28628ef6a595eb8588075b82245dbf1e5e040c09b85904bfcbbdba"} Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.172337 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.172437 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" event={"ID":"5811da44-659c-4848-9b68-ad12eba34f47","Type":"ContainerStarted","Data":"3c0c31d2dfc3e01db41502a22c58f28beca2f8e757967309914ed96743702ea0"} Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.172599 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" event={"ID":"76ca1343-914b-414b-b1e3-5e4b2a165697","Type":"ContainerStarted","Data":"6152b7c9175fee7d3310a6b9d673f6e4cf6aca088ae91f27157ef8f44db083a6"} Nov 22 
07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.172646 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" event={"ID":"76ca1343-914b-414b-b1e3-5e4b2a165697","Type":"ContainerStarted","Data":"4b0d118b96e5e29a091130d4c191ff203d0da0c23d9181c414afb61d8bb0d8e5"} Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.172826 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.175342 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.183209 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.192429 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7dbccbc674-pdg58" podStartSLOduration=4.192406402 podStartE2EDuration="4.192406402s" podCreationTimestamp="2025-11-22 07:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:15:22.189709906 +0000 UTC m=+764.603103164" watchObservedRunningTime="2025-11-22 07:15:22.192406402 +0000 UTC m=+764.605799660" Nov 22 07:15:22 crc kubenswrapper[4856]: I1122 07:15:22.208635 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" podStartSLOduration=4.208611268 podStartE2EDuration="4.208611268s" podCreationTimestamp="2025-11-22 07:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:15:22.206776347 +0000 UTC m=+764.620169605" watchObservedRunningTime="2025-11-22 07:15:22.208611268 +0000 UTC m=+764.622004526" Nov 22 07:15:25 crc kubenswrapper[4856]: I1122 07:15:25.711054 4856 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.598398 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2685z"] Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.599221 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-controller" containerID="cri-o://fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b" gracePeriod=30 Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.599301 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="northd" containerID="cri-o://39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0" gracePeriod=30 Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.599364 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417" gracePeriod=30 Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.599425 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kube-rbac-proxy-node" containerID="cri-o://b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a" gracePeriod=30 Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.599471 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-acl-logging" containerID="cri-o://3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908" gracePeriod=30 Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.599542 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="sbdb" containerID="cri-o://833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40" gracePeriod=30 Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.599601 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="nbdb" containerID="cri-o://2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd" gracePeriod=30 Nov 22 07:15:41 crc kubenswrapper[4856]: I1122 07:15:41.630883 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" containerID="cri-o://48539420b5e6d6a577381d7e945bd14c09869ee456dba40d36330cf27bd84070" gracePeriod=30 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.269632 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovnkube-controller/3.log" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.273809 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovn-acl-logging/0.log" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.274689 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovn-controller/0.log" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275208 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="48539420b5e6d6a577381d7e945bd14c09869ee456dba40d36330cf27bd84070" exitCode=0 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275242 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40" exitCode=0 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275251 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd" exitCode=0 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275258 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0" exitCode=0 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275266 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417" exitCode=0 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275273 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a" exitCode=0 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275281 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908" exitCode=143 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275289 4856 generic.go:334] "Generic (PLEG): container finished" podID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerID="fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b" exitCode=143 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275277 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"48539420b5e6d6a577381d7e945bd14c09869ee456dba40d36330cf27bd84070"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275332 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275350 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275366 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275377 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275388 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275397 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275406 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" 
event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.275417 4856 scope.go:117] "RemoveContainer" containerID="31005378d312e7eb0fbec5afe8f46c240aa7062688c991f3174244a329e6f13e" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.277373 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/2.log" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.277892 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/1.log" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.277939 4856 generic.go:334] "Generic (PLEG): container finished" podID="59c3498a-6659-454c-9fe0-361fa7a0783c" containerID="96df3ae9766dbae643106da1572f9d0c1c5787e1e82f6dbb57a18cf7ba6e3c10" exitCode=2 Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.277970 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerDied","Data":"96df3ae9766dbae643106da1572f9d0c1c5787e1e82f6dbb57a18cf7ba6e3c10"} Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.278478 4856 scope.go:117] "RemoveContainer" containerID="96df3ae9766dbae643106da1572f9d0c1c5787e1e82f6dbb57a18cf7ba6e3c10" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.302392 4856 scope.go:117] "RemoveContainer" containerID="85802497d854ef578bcdbf9c5f897f66796b88ff8d65b13ddd9e41b1be93d956" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.447909 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovn-acl-logging/0.log" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.449377 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovn-controller/0.log" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.450045 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485688 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-script-lib\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485757 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-netns\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485794 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-netd\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485816 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-slash\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485840 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-log-socket\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485885 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-ovn\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485913 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-env-overrides\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485951 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-etc-openvswitch\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.485976 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-kubelet\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486007 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486062 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/752eee1c-98a9-4221-88a7-f332f704d4cf-ovn-node-metrics-cert\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486103 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxgp8\" (UniqueName: \"kubernetes.io/projected/752eee1c-98a9-4221-88a7-f332f704d4cf-kube-api-access-wxgp8\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486132 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-node-log\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486163 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-openvswitch\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486197 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-config\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486223 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-systemd-units\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486281 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-var-lib-openvswitch\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486347 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-ovn-kubernetes\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486376 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-systemd\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486417 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-bin\") pod \"752eee1c-98a9-4221-88a7-f332f704d4cf\" (UID: \"752eee1c-98a9-4221-88a7-f332f704d4cf\") " Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.486918 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.487587 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.487811 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.487873 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.487903 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-slash" (OuterVolumeSpecName: "host-slash") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.487961 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-log-socket" (OuterVolumeSpecName: "log-socket") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.487989 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.488553 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.488591 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.488785 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.488840 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.492557 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.493685 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-node-log" (OuterVolumeSpecName: "node-log") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.493711 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.493756 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.493778 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.493826 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.501147 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752eee1c-98a9-4221-88a7-f332f704d4cf-kube-api-access-wxgp8" (OuterVolumeSpecName: "kube-api-access-wxgp8") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "kube-api-access-wxgp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.501190 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/752eee1c-98a9-4221-88a7-f332f704d4cf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517451 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-77vxz"] Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517744 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="northd" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517763 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="northd" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517795 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517804 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517816 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="sbdb" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517823 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="sbdb" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517832 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517839 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517845 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517851 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517859 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-acl-logging" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517865 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-acl-logging" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517872 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kubecfg-setup" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517878 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kubecfg-setup" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.517887 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.517893 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.518067 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" 
containerName="kube-rbac-proxy-node" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518093 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kube-rbac-proxy-node" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.518100 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518107 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.518120 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518126 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.518135 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="nbdb" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518163 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="nbdb" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518296 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518304 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518311 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kube-rbac-proxy-node" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518318 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518324 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="nbdb" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518336 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518344 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518355 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="sbdb" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518362 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovn-acl-logging" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518372 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="northd" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.518383 4856 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="kube-rbac-proxy-ovn-metrics" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.519146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "752eee1c-98a9-4221-88a7-f332f704d4cf" (UID: "752eee1c-98a9-4221-88a7-f332f704d4cf"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:15:42 crc kubenswrapper[4856]: E1122 07:15:42.519745 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.519759 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.519873 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" containerName="ovnkube-controller" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.521652 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587439 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovn-node-metrics-cert\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587493 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-cni-bin\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587538 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovnkube-config\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587561 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-kubelet\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587584 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-log-socket\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587628 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-slash\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-run-ovn-kubernetes\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587680 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587714 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-systemd\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587729 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-var-lib-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-node-log\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587773 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovnkube-script-lib\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587792 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-etc-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc 
kubenswrapper[4856]: I1122 07:15:42.587811 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-env-overrides\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587825 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-run-netns\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587847 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqrpb\" (UniqueName: \"kubernetes.io/projected/e002b170-bf40-4305-8ae3-fa8eed21a17d-kube-api-access-gqrpb\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-systemd-units\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587883 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-ovn\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-cni-netd\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587936 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587945 4856 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587954 4856 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587963 4856 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587972 
4856 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587984 4856 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.587994 4856 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588006 4856 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588016 4856 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588029 4856 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-slash\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588040 4856 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-log-socket\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588051 4856 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588063 4856 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/752eee1c-98a9-4221-88a7-f332f704d4cf-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588074 4856 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588082 4856 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588090 4856 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588098 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/752eee1c-98a9-4221-88a7-f332f704d4cf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588107 4856 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-wxgp8\" (UniqueName: \"kubernetes.io/projected/752eee1c-98a9-4221-88a7-f332f704d4cf-kube-api-access-wxgp8\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588115 4856 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-node-log\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.588123 4856 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/752eee1c-98a9-4221-88a7-f332f704d4cf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689138 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-env-overrides\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689179 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-run-netns\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689202 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqrpb\" (UniqueName: \"kubernetes.io/projected/e002b170-bf40-4305-8ae3-fa8eed21a17d-kube-api-access-gqrpb\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-systemd-units\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689263 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-ovn\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689283 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-cni-netd\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689287 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-run-netns\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovn-node-metrics-cert\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689337 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-cni-bin\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-systemd-units\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovnkube-config\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689382 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-kubelet\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-ovn\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-log-socket\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689383 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-cni-netd\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689589 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-kubelet\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-log-socket\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689658 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-slash\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-run-ovn-kubernetes\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689707 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689762 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-systemd\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689796 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-var-lib-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689828 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-node-log\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689854 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovnkube-script-lib\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689887 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-etc-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689967 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-etc-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.689995 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-slash\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690017 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-run-ovn-kubernetes\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690047 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-var-lib-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690059 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-env-overrides\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690066 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-systemd\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690097 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-run-openvswitch\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690146 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690176 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-host-cni-bin\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 
07:15:42.690195 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e002b170-bf40-4305-8ae3-fa8eed21a17d-node-log\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690625 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovnkube-config\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.690700 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovnkube-script-lib\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.693411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e002b170-bf40-4305-8ae3-fa8eed21a17d-ovn-node-metrics-cert\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.706389 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqrpb\" (UniqueName: \"kubernetes.io/projected/e002b170-bf40-4305-8ae3-fa8eed21a17d-kube-api-access-gqrpb\") pod \"ovnkube-node-77vxz\" (UID: \"e002b170-bf40-4305-8ae3-fa8eed21a17d\") " pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:42 crc kubenswrapper[4856]: I1122 07:15:42.837968 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.287112 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovn-acl-logging/0.log" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.288242 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2685z_752eee1c-98a9-4221-88a7-f332f704d4cf/ovn-controller/0.log" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.288775 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" event={"ID":"752eee1c-98a9-4221-88a7-f332f704d4cf","Type":"ContainerDied","Data":"2fb6064aa74579d426014990b59839b5244e3e70b91052ef254e2eab72f5f77a"} Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.288805 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2685z" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.288842 4856 scope.go:117] "RemoveContainer" containerID="48539420b5e6d6a577381d7e945bd14c09869ee456dba40d36330cf27bd84070" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.292232 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fjqpv_59c3498a-6659-454c-9fe0-361fa7a0783c/kube-multus/2.log" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.292326 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fjqpv" event={"ID":"59c3498a-6659-454c-9fe0-361fa7a0783c","Type":"ContainerStarted","Data":"7adb78688f9980fcf3f606df453b85fe77669444ac6490a7e7ac8ce667f7f318"} Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.294410 4856 generic.go:334] "Generic (PLEG): container finished" podID="e002b170-bf40-4305-8ae3-fa8eed21a17d" containerID="cdfed56e22f3838f6d912ff4c0e167327ccee537abf6bd92138e38ac59524573" exitCode=0 Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.294462 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerDied","Data":"cdfed56e22f3838f6d912ff4c0e167327ccee537abf6bd92138e38ac59524573"} Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.294529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"0a75c3e550d8fb7a5bc16fba7352b46b731b487fad9d24b34e35ce2cb85bd998"} Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.307010 4856 scope.go:117] "RemoveContainer" containerID="833308ad6cfa55b6259fdcaf2e2c44abbfe9b69d2c5b9910a635da3dab447a40" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.338040 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2685z"] Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.339522 4856 scope.go:117] "RemoveContainer" containerID="2b3dc1524bca809b324ce4b75a691a5490ff31729cde1bb1044ad2e8f9f8eebd" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.345041 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2685z"] Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.371201 4856 scope.go:117] "RemoveContainer" containerID="39acb3e1db2dd509333744514beb119693394cc37358e64652dd302f35534be0" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.396237 4856 scope.go:117] "RemoveContainer" containerID="000cdf36cbee0eff73d2d00acf0bec160c68361baad2768291dcd058b6b9e417" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.417902 4856 scope.go:117] "RemoveContainer" containerID="b3bd0c800c0add605504f7ff6146b872138bcadfd6b6adb0ea160f43a900062a" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.432458 4856 scope.go:117] "RemoveContainer" containerID="3e0a7c7e1eb92cab5775b5905dce2f533edcc0e09468f436bb1abe9304603908" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.448946 4856 scope.go:117] "RemoveContainer" containerID="fc38d6f5b9c901bb9c6e2e04dffbcc541e31aea2b62fa068b30d3463a914d85b" Nov 22 07:15:43 crc kubenswrapper[4856]: I1122 07:15:43.470989 4856 scope.go:117] "RemoveContainer" containerID="608de7e79478ad2712a7e582d4628fbc80851cff11a920ede7adf7430f8c46ec" Nov 22 07:15:44 crc kubenswrapper[4856]: I1122 07:15:44.303104 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"9a056df25a3b54f5bd3eb1337f86c99a7d3db8b79e90dee2d89ea52a41624174"} Nov 22 07:15:44 crc kubenswrapper[4856]: I1122 07:15:44.303439 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"d0b871a764be7d5e8be5f3e66123268e5338979687f5d4142d912159efe97cad"} Nov 22 07:15:44 crc kubenswrapper[4856]: I1122 07:15:44.303454 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"7890d363d9d4a4d9a4dedfe8c373f9d4141aa717c0b0c006f74d9d9065c0249b"} Nov 22 07:15:44 crc kubenswrapper[4856]: I1122 07:15:44.303465 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"d8fe4ac25c104c9da2d7a07df8a0a2a09a6b3facadf0ae821d2f4e8cacc8cef2"} Nov 22 07:15:44 crc kubenswrapper[4856]: I1122 07:15:44.303474 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"9e54b3133312645dca9e7520c3d91330a545ad4f047542f83e0e5ee2b6ff1e69"} Nov 22 07:15:44 crc kubenswrapper[4856]: I1122 07:15:44.303483 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"90480c1f1da622b5dbcf6274a7223ba105770d4dcd6fbbea676dc1466cc65b1c"} Nov 22 07:15:44 crc kubenswrapper[4856]: I1122 07:15:44.716204 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="752eee1c-98a9-4221-88a7-f332f704d4cf" path="/var/lib/kubelet/pods/752eee1c-98a9-4221-88a7-f332f704d4cf/volumes" Nov 22 07:15:46 crc kubenswrapper[4856]: I1122 07:15:46.324907 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"2509ba5241ded8c265fffe104bd289db6e385eeab98471e8caca23bb30de1594"} Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.167005 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-v4vg6"] Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.168188 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.170948 4856 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-zw7lf" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.171110 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.171176 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.171261 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.257762 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-crc-storage\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.257859 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p4wc\" (UniqueName: \"kubernetes.io/projected/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-kube-api-access-9p4wc\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.258157 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-node-mnt\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.360059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-crc-storage\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.360158 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p4wc\" (UniqueName: \"kubernetes.io/projected/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-kube-api-access-9p4wc\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.360230 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-node-mnt\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.360657 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-node-mnt\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.361065 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-crc-storage\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.377960 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p4wc\" (UniqueName: \"kubernetes.io/projected/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-kube-api-access-9p4wc\") pod \"crc-storage-crc-v4vg6\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: I1122 07:15:48.487150 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: E1122 07:15:48.509815 4856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(c9207056316c6606876dfe49976de8b4f5ef6ce4fc04d31748a3962a431f3939): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:15:48 crc kubenswrapper[4856]: E1122 07:15:48.509973 4856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(c9207056316c6606876dfe49976de8b4f5ef6ce4fc04d31748a3962a431f3939): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: E1122 07:15:48.510047 4856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(c9207056316c6606876dfe49976de8b4f5ef6ce4fc04d31748a3962a431f3939): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:48 crc kubenswrapper[4856]: E1122 07:15:48.510152 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-v4vg6_crc-storage(5c3d51ee-61ce-4d1b-936d-a69a12c83fb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-v4vg6_crc-storage(5c3d51ee-61ce-4d1b-936d-a69a12c83fb5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(c9207056316c6606876dfe49976de8b4f5ef6ce4fc04d31748a3962a431f3939): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-v4vg6" podUID="5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.350455 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" event={"ID":"e002b170-bf40-4305-8ae3-fa8eed21a17d","Type":"ContainerStarted","Data":"f19d2730c7dce7cc50613b8c2502393e48cc881f34c079ac6819b0f01ad97028"} Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.350755 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.350882 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.375642 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.384229 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" podStartSLOduration=8.384201216 podStartE2EDuration="8.384201216s" podCreationTimestamp="2025-11-22 07:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:15:50.382152597 +0000 UTC m=+792.795545875" watchObservedRunningTime="2025-11-22 07:15:50.384201216 +0000 UTC m=+792.797594474" Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.743199 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-v4vg6"] Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.743891 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:50 crc kubenswrapper[4856]: I1122 07:15:50.744429 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:50 crc kubenswrapper[4856]: E1122 07:15:50.788769 4856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(748bd8f7d43f14ca0d9d634f7839ba6db967ea7bdd064824ad51fa7cc4d91eb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 22 07:15:50 crc kubenswrapper[4856]: E1122 07:15:50.788852 4856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(748bd8f7d43f14ca0d9d634f7839ba6db967ea7bdd064824ad51fa7cc4d91eb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:50 crc kubenswrapper[4856]: E1122 07:15:50.788874 4856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(748bd8f7d43f14ca0d9d634f7839ba6db967ea7bdd064824ad51fa7cc4d91eb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:15:50 crc kubenswrapper[4856]: E1122 07:15:50.788922 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-v4vg6_crc-storage(5c3d51ee-61ce-4d1b-936d-a69a12c83fb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-v4vg6_crc-storage(5c3d51ee-61ce-4d1b-936d-a69a12c83fb5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-v4vg6_crc-storage_5c3d51ee-61ce-4d1b-936d-a69a12c83fb5_0(748bd8f7d43f14ca0d9d634f7839ba6db967ea7bdd064824ad51fa7cc4d91eb5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-v4vg6" podUID="5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" Nov 22 07:15:51 crc kubenswrapper[4856]: I1122 07:15:51.356238 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:51 crc kubenswrapper[4856]: I1122 07:15:51.383886 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:15:59 crc kubenswrapper[4856]: I1122 07:15:59.754983 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:15:59 crc kubenswrapper[4856]: I1122 07:15:59.755430 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:16:02 crc kubenswrapper[4856]: I1122 07:16:02.708878 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:16:02 crc kubenswrapper[4856]: I1122 07:16:02.709787 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:16:03 crc kubenswrapper[4856]: I1122 07:16:03.113178 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-v4vg6"] Nov 22 07:16:03 crc kubenswrapper[4856]: W1122 07:16:03.115795 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c3d51ee_61ce_4d1b_936d_a69a12c83fb5.slice/crio-a3c7eb50bd0db6ef803b14a70f3808a0bb5d12541b76cefd4f15c77e8a287923 WatchSource:0}: Error finding container a3c7eb50bd0db6ef803b14a70f3808a0bb5d12541b76cefd4f15c77e8a287923: Status 404 returned error can't find the container with id a3c7eb50bd0db6ef803b14a70f3808a0bb5d12541b76cefd4f15c77e8a287923 Nov 22 07:16:03 crc kubenswrapper[4856]: I1122 07:16:03.117721 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:16:03 crc kubenswrapper[4856]: I1122 07:16:03.415799 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-v4vg6" event={"ID":"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5","Type":"ContainerStarted","Data":"a3c7eb50bd0db6ef803b14a70f3808a0bb5d12541b76cefd4f15c77e8a287923"} Nov 22 07:16:04 crc kubenswrapper[4856]: I1122 07:16:04.425847 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-v4vg6" event={"ID":"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5","Type":"ContainerStarted","Data":"1b78db4dcb6243b88818e65cb9e71f6ee40dec58af9897cd29c76851b4505745"} Nov 22 07:16:04 crc kubenswrapper[4856]: I1122 07:16:04.440261 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="crc-storage/crc-storage-crc-v4vg6" podStartSLOduration=15.310312773 podStartE2EDuration="16.440239172s" podCreationTimestamp="2025-11-22 07:15:48 +0000 UTC" firstStartedPulling="2025-11-22 07:16:03.11745265 +0000 UTC m=+805.530845918" lastFinishedPulling="2025-11-22 07:16:04.247379059 +0000 UTC m=+806.660772317" observedRunningTime="2025-11-22 07:16:04.439550702 +0000 UTC m=+806.852943960" watchObservedRunningTime="2025-11-22 07:16:04.440239172 +0000 UTC m=+806.853632430" Nov 22 07:16:05 crc kubenswrapper[4856]: I1122 07:16:05.434881 4856 generic.go:334] "Generic (PLEG): container finished" podID="5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" containerID="1b78db4dcb6243b88818e65cb9e71f6ee40dec58af9897cd29c76851b4505745" exitCode=0 Nov 22 07:16:05 crc kubenswrapper[4856]: I1122 07:16:05.434969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-v4vg6" event={"ID":"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5","Type":"ContainerDied","Data":"1b78db4dcb6243b88818e65cb9e71f6ee40dec58af9897cd29c76851b4505745"} Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.667257 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.730584 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p4wc\" (UniqueName: \"kubernetes.io/projected/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-kube-api-access-9p4wc\") pod \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.730758 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-crc-storage\") pod \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.730827 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-node-mnt\") pod \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\" (UID: \"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5\") " Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.731230 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" (UID: "5c3d51ee-61ce-4d1b-936d-a69a12c83fb5"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.731846 4856 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.736311 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-kube-api-access-9p4wc" (OuterVolumeSpecName: "kube-api-access-9p4wc") pod "5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" (UID: "5c3d51ee-61ce-4d1b-936d-a69a12c83fb5"). InnerVolumeSpecName "kube-api-access-9p4wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.743442 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" (UID: "5c3d51ee-61ce-4d1b-936d-a69a12c83fb5"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.833067 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p4wc\" (UniqueName: \"kubernetes.io/projected/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-kube-api-access-9p4wc\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:06 crc kubenswrapper[4856]: I1122 07:16:06.833104 4856 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:07 crc kubenswrapper[4856]: I1122 07:16:07.446253 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-v4vg6" event={"ID":"5c3d51ee-61ce-4d1b-936d-a69a12c83fb5","Type":"ContainerDied","Data":"a3c7eb50bd0db6ef803b14a70f3808a0bb5d12541b76cefd4f15c77e8a287923"} Nov 22 07:16:07 crc kubenswrapper[4856]: I1122 07:16:07.446294 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c7eb50bd0db6ef803b14a70f3808a0bb5d12541b76cefd4f15c77e8a287923" Nov 22 07:16:07 crc kubenswrapper[4856]: I1122 07:16:07.446649 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-v4vg6" Nov 22 07:16:12 crc kubenswrapper[4856]: I1122 07:16:12.880369 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-77vxz" Nov 22 07:16:13 crc kubenswrapper[4856]: I1122 07:16:13.953604 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt"] Nov 22 07:16:13 crc kubenswrapper[4856]: E1122 07:16:13.953869 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" containerName="storage" Nov 22 07:16:13 crc kubenswrapper[4856]: I1122 07:16:13.953886 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" containerName="storage" Nov 22 07:16:13 crc kubenswrapper[4856]: I1122 07:16:13.954034 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" containerName="storage" Nov 22 07:16:13 crc kubenswrapper[4856]: I1122 07:16:13.954994 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:13 crc kubenswrapper[4856]: I1122 07:16:13.956729 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:16:13 crc kubenswrapper[4856]: I1122 07:16:13.964432 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt"] Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.018905 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.019312 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.019377 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4t9g\" (UniqueName: \"kubernetes.io/projected/30e04f43-7f8f-41bf-9253-8628ff4bd88d-kube-api-access-h4t9g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.120557 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4t9g\" (UniqueName: \"kubernetes.io/projected/30e04f43-7f8f-41bf-9253-8628ff4bd88d-kube-api-access-h4t9g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.120610 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.120691 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.121203 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.121338 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.138547 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4t9g\" (UniqueName: \"kubernetes.io/projected/30e04f43-7f8f-41bf-9253-8628ff4bd88d-kube-api-access-h4t9g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.274659 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:14 crc kubenswrapper[4856]: I1122 07:16:14.660276 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt"] Nov 22 07:16:15 crc kubenswrapper[4856]: I1122 07:16:15.491287 4856 generic.go:334] "Generic (PLEG): container finished" podID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerID="d834c3ee6f279faf0ff4bb53ee6fd09a76bef94c46f637df79e9550e4dbea21d" exitCode=0 Nov 22 07:16:15 crc kubenswrapper[4856]: I1122 07:16:15.491361 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" event={"ID":"30e04f43-7f8f-41bf-9253-8628ff4bd88d","Type":"ContainerDied","Data":"d834c3ee6f279faf0ff4bb53ee6fd09a76bef94c46f637df79e9550e4dbea21d"} Nov 22 07:16:15 crc kubenswrapper[4856]: I1122 07:16:15.491679 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" event={"ID":"30e04f43-7f8f-41bf-9253-8628ff4bd88d","Type":"ContainerStarted","Data":"7b860195e7eaaf363c0d02e6a08defda5f70c88d70516ba138c467c39524227c"} Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.253812 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lfbpr"] Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.254880 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.272575 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfbpr"] Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.348601 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6gxr\" (UniqueName: \"kubernetes.io/projected/d3e7ba10-389d-422c-b193-b61753dc349a-kube-api-access-c6gxr\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.348691 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-utilities\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.348710 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-catalog-content\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.449399 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-utilities\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.449458 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-catalog-content\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.449545 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6gxr\" (UniqueName: \"kubernetes.io/projected/d3e7ba10-389d-422c-b193-b61753dc349a-kube-api-access-c6gxr\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.450050 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-utilities\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.450100 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-catalog-content\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.469818 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c6gxr\" (UniqueName: \"kubernetes.io/projected/d3e7ba10-389d-422c-b193-b61753dc349a-kube-api-access-c6gxr\") pod \"redhat-operators-lfbpr\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.574810 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:16 crc kubenswrapper[4856]: I1122 07:16:16.978937 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lfbpr"] Nov 22 07:16:16 crc kubenswrapper[4856]: W1122 07:16:16.984018 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e7ba10_389d_422c_b193_b61753dc349a.slice/crio-1d018e4670ad2c68d18906190cb18d959e86177b544c23ac3f1ae0aecd7f9e5e WatchSource:0}: Error finding container 1d018e4670ad2c68d18906190cb18d959e86177b544c23ac3f1ae0aecd7f9e5e: Status 404 returned error can't find the container with id 1d018e4670ad2c68d18906190cb18d959e86177b544c23ac3f1ae0aecd7f9e5e Nov 22 07:16:17 crc kubenswrapper[4856]: I1122 07:16:17.504598 4856 generic.go:334] "Generic (PLEG): container finished" podID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerID="98eb89e556bc7f06748344339bd2947ecf8951d756d4e61904202d22ad9bd055" exitCode=0 Nov 22 07:16:17 crc kubenswrapper[4856]: I1122 07:16:17.504647 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" event={"ID":"30e04f43-7f8f-41bf-9253-8628ff4bd88d","Type":"ContainerDied","Data":"98eb89e556bc7f06748344339bd2947ecf8951d756d4e61904202d22ad9bd055"} Nov 22 07:16:17 crc kubenswrapper[4856]: I1122 07:16:17.506037 4856 generic.go:334] "Generic (PLEG): container finished" podID="d3e7ba10-389d-422c-b193-b61753dc349a" containerID="3612e4e1816248d469271e7e476d8cd48ade5c40b6b70f3c4fc8aaf9229deeff" exitCode=0 Nov 22 07:16:17 crc kubenswrapper[4856]: I1122 07:16:17.506072 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbpr" event={"ID":"d3e7ba10-389d-422c-b193-b61753dc349a","Type":"ContainerDied","Data":"3612e4e1816248d469271e7e476d8cd48ade5c40b6b70f3c4fc8aaf9229deeff"} Nov 22 07:16:17 crc kubenswrapper[4856]: I1122 07:16:17.506105 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbpr" event={"ID":"d3e7ba10-389d-422c-b193-b61753dc349a","Type":"ContainerStarted","Data":"1d018e4670ad2c68d18906190cb18d959e86177b544c23ac3f1ae0aecd7f9e5e"} Nov 22 07:16:18 crc kubenswrapper[4856]: I1122 07:16:18.515142 4856 generic.go:334] "Generic (PLEG): container finished" podID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerID="f6819d80758da9514e6e9832ed9bb977bd4c95e42eaf9a2fa9c63e3a51f30c24" exitCode=0 Nov 22 07:16:18 crc kubenswrapper[4856]: I1122 07:16:18.515231 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" event={"ID":"30e04f43-7f8f-41bf-9253-8628ff4bd88d","Type":"ContainerDied","Data":"f6819d80758da9514e6e9832ed9bb977bd4c95e42eaf9a2fa9c63e3a51f30c24"} Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.522164 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbpr" 
event={"ID":"d3e7ba10-389d-422c-b193-b61753dc349a","Type":"ContainerStarted","Data":"44172cf22ca5ed5a2d40a56370cd7927f68868496ad974a1e15938c52d1001c1"} Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.767015 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.791450 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4t9g\" (UniqueName: \"kubernetes.io/projected/30e04f43-7f8f-41bf-9253-8628ff4bd88d-kube-api-access-h4t9g\") pod \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.791674 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-util\") pod \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.791783 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-bundle\") pod \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\" (UID: \"30e04f43-7f8f-41bf-9253-8628ff4bd88d\") " Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.792302 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-bundle" (OuterVolumeSpecName: "bundle") pod "30e04f43-7f8f-41bf-9253-8628ff4bd88d" (UID: "30e04f43-7f8f-41bf-9253-8628ff4bd88d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.794716 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.798819 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30e04f43-7f8f-41bf-9253-8628ff4bd88d-kube-api-access-h4t9g" (OuterVolumeSpecName: "kube-api-access-h4t9g") pod "30e04f43-7f8f-41bf-9253-8628ff4bd88d" (UID: "30e04f43-7f8f-41bf-9253-8628ff4bd88d"). InnerVolumeSpecName "kube-api-access-h4t9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.811424 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-util" (OuterVolumeSpecName: "util") pod "30e04f43-7f8f-41bf-9253-8628ff4bd88d" (UID: "30e04f43-7f8f-41bf-9253-8628ff4bd88d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.896550 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4t9g\" (UniqueName: \"kubernetes.io/projected/30e04f43-7f8f-41bf-9253-8628ff4bd88d-kube-api-access-h4t9g\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:19 crc kubenswrapper[4856]: I1122 07:16:19.896590 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30e04f43-7f8f-41bf-9253-8628ff4bd88d-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:20 crc kubenswrapper[4856]: I1122 07:16:20.530460 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" event={"ID":"30e04f43-7f8f-41bf-9253-8628ff4bd88d","Type":"ContainerDied","Data":"7b860195e7eaaf363c0d02e6a08defda5f70c88d70516ba138c467c39524227c"} Nov 22 07:16:20 crc kubenswrapper[4856]: I1122 07:16:20.530490 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt" Nov 22 07:16:20 crc kubenswrapper[4856]: I1122 07:16:20.530504 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b860195e7eaaf363c0d02e6a08defda5f70c88d70516ba138c467c39524227c" Nov 22 07:16:20 crc kubenswrapper[4856]: I1122 07:16:20.533069 4856 generic.go:334] "Generic (PLEG): container finished" podID="d3e7ba10-389d-422c-b193-b61753dc349a" containerID="44172cf22ca5ed5a2d40a56370cd7927f68868496ad974a1e15938c52d1001c1" exitCode=0 Nov 22 07:16:20 crc kubenswrapper[4856]: I1122 07:16:20.533155 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbpr" event={"ID":"d3e7ba10-389d-422c-b193-b61753dc349a","Type":"ContainerDied","Data":"44172cf22ca5ed5a2d40a56370cd7927f68868496ad974a1e15938c52d1001c1"} Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.542776 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbpr" event={"ID":"d3e7ba10-389d-422c-b193-b61753dc349a","Type":"ContainerStarted","Data":"441ffb0b484844a8785ff3222cc3863efaea74a9fb00aeed44e4f6385de75886"} Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.573557 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lfbpr" podStartSLOduration=1.956318719 podStartE2EDuration="5.573494575s" podCreationTimestamp="2025-11-22 07:16:16 +0000 UTC" firstStartedPulling="2025-11-22 07:16:17.507446012 +0000 UTC m=+819.920839270" lastFinishedPulling="2025-11-22 07:16:21.124621868 +0000 UTC m=+823.538015126" observedRunningTime="2025-11-22 07:16:21.567370523 +0000 UTC m=+823.980763801" watchObservedRunningTime="2025-11-22 07:16:21.573494575 +0000 UTC m=+823.986887853" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.800842 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-wfj97"] Nov 22 07:16:21 crc kubenswrapper[4856]: E1122 07:16:21.801726 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerName="extract" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.801820 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerName="extract" Nov 22 07:16:21 crc kubenswrapper[4856]: E1122 07:16:21.801901 4856 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerName="pull" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.801972 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerName="pull" Nov 22 07:16:21 crc kubenswrapper[4856]: E1122 07:16:21.802053 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerName="util" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.802119 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerName="util" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.802336 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="30e04f43-7f8f-41bf-9253-8628ff4bd88d" containerName="extract" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.803059 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.805368 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.805384 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.807308 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xxjw4" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.815617 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-wfj97"] Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.821916 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9kfk\" (UniqueName: \"kubernetes.io/projected/ded5842b-24c9-4039-ba91-2bed9c39a83b-kube-api-access-z9kfk\") pod \"nmstate-operator-557fdffb88-wfj97\" (UID: \"ded5842b-24c9-4039-ba91-2bed9c39a83b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.923738 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9kfk\" (UniqueName: \"kubernetes.io/projected/ded5842b-24c9-4039-ba91-2bed9c39a83b-kube-api-access-z9kfk\") pod \"nmstate-operator-557fdffb88-wfj97\" (UID: \"ded5842b-24c9-4039-ba91-2bed9c39a83b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" Nov 22 07:16:21 crc kubenswrapper[4856]: I1122 07:16:21.943593 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9kfk\" (UniqueName: \"kubernetes.io/projected/ded5842b-24c9-4039-ba91-2bed9c39a83b-kube-api-access-z9kfk\") pod \"nmstate-operator-557fdffb88-wfj97\" (UID: \"ded5842b-24c9-4039-ba91-2bed9c39a83b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" Nov 22 07:16:22 crc kubenswrapper[4856]: I1122 07:16:22.122699 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" Nov 22 07:16:22 crc kubenswrapper[4856]: I1122 07:16:22.364947 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-wfj97"] Nov 22 07:16:22 crc kubenswrapper[4856]: I1122 07:16:22.549826 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" event={"ID":"ded5842b-24c9-4039-ba91-2bed9c39a83b","Type":"ContainerStarted","Data":"f9982b985fae88b61684363139bd24ca4c26e5855166005136926b32d9f537a5"} Nov 22 07:16:25 crc kubenswrapper[4856]: I1122 07:16:25.575129 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" event={"ID":"ded5842b-24c9-4039-ba91-2bed9c39a83b","Type":"ContainerStarted","Data":"f5217352801f72a6cdde0991e7574295abe9fb8e124d673bb2dc0284a859fd54"} Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.548253 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-wfj97" podStartSLOduration=2.842225163 podStartE2EDuration="5.548228363s" podCreationTimestamp="2025-11-22 07:16:21 +0000 UTC" firstStartedPulling="2025-11-22 07:16:22.379632206 +0000 UTC m=+824.793025464" lastFinishedPulling="2025-11-22 07:16:25.085635406 +0000 UTC m=+827.499028664" observedRunningTime="2025-11-22 07:16:25.59814544 +0000 UTC m=+828.011538698" watchObservedRunningTime="2025-11-22 07:16:26.548228363 +0000 UTC m=+828.961621631" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.551151 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.552422 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.554772 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-glx8c" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.563831 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.575616 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.575662 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.576112 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.576985 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.581254 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.585711 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wrrsc"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.586413 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.596483 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsqg9\" (UniqueName: \"kubernetes.io/projected/01cfaa66-61e4-414d-b456-8a6c64a2ed5a-kube-api-access-fsqg9\") pod \"nmstate-metrics-5dcf9c57c5-gkdzn\" (UID: \"01cfaa66-61e4-414d-b456-8a6c64a2ed5a\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.612636 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.697804 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsqg9\" (UniqueName: \"kubernetes.io/projected/01cfaa66-61e4-414d-b456-8a6c64a2ed5a-kube-api-access-fsqg9\") pod \"nmstate-metrics-5dcf9c57c5-gkdzn\" (UID: \"01cfaa66-61e4-414d-b456-8a6c64a2ed5a\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.698331 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-nmstate-lock\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.698382 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-ovs-socket\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.698409 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-dbus-socket\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.698426 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frrl2\" (UniqueName: \"kubernetes.io/projected/902c7237-e48c-4e23-a3fa-88b76d745120-kube-api-access-frrl2\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.698760 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjm7d\" (UniqueName: \"kubernetes.io/projected/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-kube-api-access-xjm7d\") pod \"nmstate-webhook-6b89b748d8-pnd6q\" (UID: \"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.698883 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-pnd6q\" (UID: \"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28\") " 
pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.706944 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.708312 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.716719 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-569lj" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.719475 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.719981 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.735176 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsqg9\" (UniqueName: \"kubernetes.io/projected/01cfaa66-61e4-414d-b456-8a6c64a2ed5a-kube-api-access-fsqg9\") pod \"nmstate-metrics-5dcf9c57c5-gkdzn\" (UID: \"01cfaa66-61e4-414d-b456-8a6c64a2ed5a\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.763905 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800417 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-pnd6q\" (UID: \"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800479 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbxpb\" (UniqueName: \"kubernetes.io/projected/3cb14566-2d38-4393-bdf4-cf9d06a764fd-kube-api-access-hbxpb\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800588 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb14566-2d38-4393-bdf4-cf9d06a764fd-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800651 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3cb14566-2d38-4393-bdf4-cf9d06a764fd-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800689 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-nmstate-lock\") pod 
\"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: E1122 07:16:26.800709 4856 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800768 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-ovs-socket\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800789 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-dbus-socket\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: E1122 07:16:26.800875 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-tls-key-pair podName:9fb0076b-cac2-41cc-aa7b-a02bb1e64c28 nodeName:}" failed. No retries permitted until 2025-11-22 07:16:27.300802902 +0000 UTC m=+829.714196340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-tls-key-pair") pod "nmstate-webhook-6b89b748d8-pnd6q" (UID: "9fb0076b-cac2-41cc-aa7b-a02bb1e64c28") : secret "openshift-nmstate-webhook" not found Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800807 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frrl2\" (UniqueName: \"kubernetes.io/projected/902c7237-e48c-4e23-a3fa-88b76d745120-kube-api-access-frrl2\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.800982 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjm7d\" (UniqueName: \"kubernetes.io/projected/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-kube-api-access-xjm7d\") pod \"nmstate-webhook-6b89b748d8-pnd6q\" (UID: \"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.801322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-nmstate-lock\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.801419 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-ovs-socket\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.801992 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/902c7237-e48c-4e23-a3fa-88b76d745120-dbus-socket\") pod \"nmstate-handler-wrrsc\" (UID: 
\"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.822702 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frrl2\" (UniqueName: \"kubernetes.io/projected/902c7237-e48c-4e23-a3fa-88b76d745120-kube-api-access-frrl2\") pod \"nmstate-handler-wrrsc\" (UID: \"902c7237-e48c-4e23-a3fa-88b76d745120\") " pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.832379 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjm7d\" (UniqueName: \"kubernetes.io/projected/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-kube-api-access-xjm7d\") pod \"nmstate-webhook-6b89b748d8-pnd6q\" (UID: \"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.869540 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.903298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbxpb\" (UniqueName: \"kubernetes.io/projected/3cb14566-2d38-4393-bdf4-cf9d06a764fd-kube-api-access-hbxpb\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.903782 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb14566-2d38-4393-bdf4-cf9d06a764fd-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: E1122 07:16:26.903942 4856 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 22 07:16:26 crc kubenswrapper[4856]: E1122 07:16:26.904033 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cb14566-2d38-4393-bdf4-cf9d06a764fd-plugin-serving-cert podName:3cb14566-2d38-4393-bdf4-cf9d06a764fd nodeName:}" failed. No retries permitted until 2025-11-22 07:16:27.404008294 +0000 UTC m=+829.817401552 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3cb14566-2d38-4393-bdf4-cf9d06a764fd-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-7qvtc" (UID: "3cb14566-2d38-4393-bdf4-cf9d06a764fd") : secret "plugin-serving-cert" not found Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.905307 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3cb14566-2d38-4393-bdf4-cf9d06a764fd-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.905459 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3cb14566-2d38-4393-bdf4-cf9d06a764fd-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.907230 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-77b44546b8-xkpms"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.908477 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.925151 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77b44546b8-xkpms"] Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.928628 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:26 crc kubenswrapper[4856]: I1122 07:16:26.940849 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbxpb\" (UniqueName: \"kubernetes.io/projected/3cb14566-2d38-4393-bdf4-cf9d06a764fd-kube-api-access-hbxpb\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.008486 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-oauth-serving-cert\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.008929 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tdw6\" (UniqueName: \"kubernetes.io/projected/960c6722-0690-4678-a033-2a3b7ac15394-kube-api-access-8tdw6\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.009030 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960c6722-0690-4678-a033-2a3b7ac15394-console-oauth-config\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.009198 
4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-trusted-ca-bundle\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.009316 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-service-ca\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.009405 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-console-config\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.009492 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960c6722-0690-4678-a033-2a3b7ac15394-console-serving-cert\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.110477 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-service-ca\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.110857 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-console-config\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.110889 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960c6722-0690-4678-a033-2a3b7ac15394-console-serving-cert\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.110906 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-oauth-serving-cert\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.110923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tdw6\" (UniqueName: \"kubernetes.io/projected/960c6722-0690-4678-a033-2a3b7ac15394-kube-api-access-8tdw6\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 
07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.110943 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960c6722-0690-4678-a033-2a3b7ac15394-console-oauth-config\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.111016 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-trusted-ca-bundle\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.111966 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-service-ca\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.112734 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-trusted-ca-bundle\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.112905 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-oauth-serving-cert\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.113776 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/960c6722-0690-4678-a033-2a3b7ac15394-console-config\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.118848 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/960c6722-0690-4678-a033-2a3b7ac15394-console-serving-cert\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.120065 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/960c6722-0690-4678-a033-2a3b7ac15394-console-oauth-config\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.136705 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tdw6\" (UniqueName: \"kubernetes.io/projected/960c6722-0690-4678-a033-2a3b7ac15394-kube-api-access-8tdw6\") pod \"console-77b44546b8-xkpms\" (UID: \"960c6722-0690-4678-a033-2a3b7ac15394\") " pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.243998 
4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.314075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-pnd6q\" (UID: \"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.318192 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9fb0076b-cac2-41cc-aa7b-a02bb1e64c28-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-pnd6q\" (UID: \"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.415383 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb14566-2d38-4393-bdf4-cf9d06a764fd-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.418426 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn"] Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.420743 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3cb14566-2d38-4393-bdf4-cf9d06a764fd-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-7qvtc\" (UID: \"3cb14566-2d38-4393-bdf4-cf9d06a764fd\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:27 crc kubenswrapper[4856]: W1122 07:16:27.423843 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01cfaa66_61e4_414d_b456_8a6c64a2ed5a.slice/crio-381e51dda68f70dea41e974555ccf99987dd369fc3134af44cada6d8d0f2df68 WatchSource:0}: Error finding container 381e51dda68f70dea41e974555ccf99987dd369fc3134af44cada6d8d0f2df68: Status 404 returned error can't find the container with id 381e51dda68f70dea41e974555ccf99987dd369fc3134af44cada6d8d0f2df68 Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.494043 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-77b44546b8-xkpms"] Nov 22 07:16:27 crc kubenswrapper[4856]: W1122 07:16:27.498162 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod960c6722_0690_4678_a033_2a3b7ac15394.slice/crio-28baa158fd543bb78be573fb2f65cc6b3022085ce7e383b0d89e34a8f0bd9743 WatchSource:0}: Error finding container 28baa158fd543bb78be573fb2f65cc6b3022085ce7e383b0d89e34a8f0bd9743: Status 404 returned error can't find the container with id 28baa158fd543bb78be573fb2f65cc6b3022085ce7e383b0d89e34a8f0bd9743 Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.513915 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.589484 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wrrsc" event={"ID":"902c7237-e48c-4e23-a3fa-88b76d745120","Type":"ContainerStarted","Data":"f7703ea276d91323bd47ecc191c170810761b35976dfdab553e6e09dd4a854d4"} Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.592762 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" event={"ID":"01cfaa66-61e4-414d-b456-8a6c64a2ed5a","Type":"ContainerStarted","Data":"381e51dda68f70dea41e974555ccf99987dd369fc3134af44cada6d8d0f2df68"} Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.596203 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77b44546b8-xkpms" event={"ID":"960c6722-0690-4678-a033-2a3b7ac15394","Type":"ContainerStarted","Data":"28baa158fd543bb78be573fb2f65cc6b3022085ce7e383b0d89e34a8f0bd9743"} Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.642089 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lfbpr" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="registry-server" probeResult="failure" output=< Nov 22 07:16:27 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:16:27 crc kubenswrapper[4856]: > Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.658091 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" Nov 22 07:16:27 crc kubenswrapper[4856]: I1122 07:16:27.930979 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q"] Nov 22 07:16:27 crc kubenswrapper[4856]: W1122 07:16:27.936454 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fb0076b_cac2_41cc_aa7b_a02bb1e64c28.slice/crio-cb5d254e46bd3bff5045d306fa467d7a46456538005dee6bc52ef7584d758786 WatchSource:0}: Error finding container cb5d254e46bd3bff5045d306fa467d7a46456538005dee6bc52ef7584d758786: Status 404 returned error can't find the container with id cb5d254e46bd3bff5045d306fa467d7a46456538005dee6bc52ef7584d758786 Nov 22 07:16:28 crc kubenswrapper[4856]: I1122 07:16:28.060586 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc"] Nov 22 07:16:28 crc kubenswrapper[4856]: W1122 07:16:28.068168 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cb14566_2d38_4393_bdf4_cf9d06a764fd.slice/crio-d2171eba00e051d7b7f9bab56ca15a3de682d5428be70c051fc9ee1cfd3bdd4f WatchSource:0}: Error finding container d2171eba00e051d7b7f9bab56ca15a3de682d5428be70c051fc9ee1cfd3bdd4f: Status 404 returned error can't find the container with id d2171eba00e051d7b7f9bab56ca15a3de682d5428be70c051fc9ee1cfd3bdd4f Nov 22 07:16:28 crc kubenswrapper[4856]: I1122 07:16:28.603317 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-77b44546b8-xkpms" event={"ID":"960c6722-0690-4678-a033-2a3b7ac15394","Type":"ContainerStarted","Data":"16b8b561f46ae2aa28531b448f58cafa03c7af876dd354bb84e6ea5a547a5ab8"} Nov 22 07:16:28 crc kubenswrapper[4856]: I1122 07:16:28.604839 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" event={"ID":"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28","Type":"ContainerStarted","Data":"cb5d254e46bd3bff5045d306fa467d7a46456538005dee6bc52ef7584d758786"} Nov 22 07:16:28 crc kubenswrapper[4856]: I1122 07:16:28.606082 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" event={"ID":"3cb14566-2d38-4393-bdf4-cf9d06a764fd","Type":"ContainerStarted","Data":"d2171eba00e051d7b7f9bab56ca15a3de682d5428be70c051fc9ee1cfd3bdd4f"} Nov 22 07:16:28 crc kubenswrapper[4856]: I1122 07:16:28.629862 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-77b44546b8-xkpms" podStartSLOduration=2.629841791 podStartE2EDuration="2.629841791s" podCreationTimestamp="2025-11-22 07:16:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:16:28.624544892 +0000 UTC m=+831.037938150" watchObservedRunningTime="2025-11-22 07:16:28.629841791 +0000 UTC m=+831.043235059" Nov 22 07:16:29 crc kubenswrapper[4856]: I1122 07:16:29.754459 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:16:29 crc kubenswrapper[4856]: I1122 07:16:29.754906 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:16:30 crc kubenswrapper[4856]: I1122 07:16:30.623714 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wrrsc" event={"ID":"902c7237-e48c-4e23-a3fa-88b76d745120","Type":"ContainerStarted","Data":"99bd078450ba221ec20167017acb79f3097194e0bbfb9286508de8ee98db5f00"} Nov 22 07:16:30 crc kubenswrapper[4856]: I1122 07:16:30.624213 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:30 crc kubenswrapper[4856]: I1122 07:16:30.627416 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" event={"ID":"9fb0076b-cac2-41cc-aa7b-a02bb1e64c28","Type":"ContainerStarted","Data":"aac07605746c7d3efba4a09cf3d8397b5deb1cf41b486751c87256a5fea92baf"} Nov 22 07:16:30 crc kubenswrapper[4856]: I1122 07:16:30.627560 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:30 crc kubenswrapper[4856]: I1122 07:16:30.628702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" event={"ID":"01cfaa66-61e4-414d-b456-8a6c64a2ed5a","Type":"ContainerStarted","Data":"9e1dd7bcc0887d244a5ee4cc60aaf7d5a18a219866700f7549fd240a6fbf4168"} Nov 22 07:16:30 crc kubenswrapper[4856]: I1122 07:16:30.645013 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wrrsc" podStartSLOduration=2.095731877 podStartE2EDuration="4.644990563s" podCreationTimestamp="2025-11-22 07:16:26 +0000 UTC" firstStartedPulling="2025-11-22 07:16:26.967878707 
+0000 UTC m=+829.381271965" lastFinishedPulling="2025-11-22 07:16:29.517137393 +0000 UTC m=+831.930530651" observedRunningTime="2025-11-22 07:16:30.637825231 +0000 UTC m=+833.051218499" watchObservedRunningTime="2025-11-22 07:16:30.644990563 +0000 UTC m=+833.058383821" Nov 22 07:16:30 crc kubenswrapper[4856]: I1122 07:16:30.668110 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" podStartSLOduration=3.080873481 podStartE2EDuration="4.668093705s" podCreationTimestamp="2025-11-22 07:16:26 +0000 UTC" firstStartedPulling="2025-11-22 07:16:27.93919873 +0000 UTC m=+830.352591988" lastFinishedPulling="2025-11-22 07:16:29.526418954 +0000 UTC m=+831.939812212" observedRunningTime="2025-11-22 07:16:30.665173693 +0000 UTC m=+833.078566961" watchObservedRunningTime="2025-11-22 07:16:30.668093705 +0000 UTC m=+833.081486963" Nov 22 07:16:33 crc kubenswrapper[4856]: I1122 07:16:33.645932 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" event={"ID":"3cb14566-2d38-4393-bdf4-cf9d06a764fd","Type":"ContainerStarted","Data":"f17f1af1262457b2561780440d86581d9857875f7b1516c8f50c31a34981192b"} Nov 22 07:16:33 crc kubenswrapper[4856]: I1122 07:16:33.659160 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-7qvtc" podStartSLOduration=2.82925529 podStartE2EDuration="7.659136769s" podCreationTimestamp="2025-11-22 07:16:26 +0000 UTC" firstStartedPulling="2025-11-22 07:16:28.070993299 +0000 UTC m=+830.484386557" lastFinishedPulling="2025-11-22 07:16:32.900874778 +0000 UTC m=+835.314268036" observedRunningTime="2025-11-22 07:16:33.658757178 +0000 UTC m=+836.072150446" watchObservedRunningTime="2025-11-22 07:16:33.659136769 +0000 UTC m=+836.072530027" Nov 22 07:16:34 crc kubenswrapper[4856]: I1122 07:16:34.654406 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" event={"ID":"01cfaa66-61e4-414d-b456-8a6c64a2ed5a","Type":"ContainerStarted","Data":"aa9695c69b361532d271be67ae8ceecbb8d0a47b274a98216ee3e2a3c86924ce"} Nov 22 07:16:34 crc kubenswrapper[4856]: I1122 07:16:34.675763 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-gkdzn" podStartSLOduration=2.143868146 podStartE2EDuration="8.67573998s" podCreationTimestamp="2025-11-22 07:16:26 +0000 UTC" firstStartedPulling="2025-11-22 07:16:27.430697239 +0000 UTC m=+829.844090497" lastFinishedPulling="2025-11-22 07:16:33.962569073 +0000 UTC m=+836.375962331" observedRunningTime="2025-11-22 07:16:34.670192673 +0000 UTC m=+837.083585941" watchObservedRunningTime="2025-11-22 07:16:34.67573998 +0000 UTC m=+837.089133238" Nov 22 07:16:36 crc kubenswrapper[4856]: I1122 07:16:36.616008 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:36 crc kubenswrapper[4856]: I1122 07:16:36.655864 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:36 crc kubenswrapper[4856]: I1122 07:16:36.846043 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfbpr"] Nov 22 07:16:36 crc kubenswrapper[4856]: I1122 07:16:36.950978 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-wrrsc" Nov 22 07:16:37 crc kubenswrapper[4856]: I1122 07:16:37.244410 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:37 crc kubenswrapper[4856]: I1122 07:16:37.244487 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:37 crc kubenswrapper[4856]: I1122 07:16:37.249071 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:37 crc kubenswrapper[4856]: I1122 07:16:37.668740 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lfbpr" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="registry-server" containerID="cri-o://441ffb0b484844a8785ff3222cc3863efaea74a9fb00aeed44e4f6385de75886" gracePeriod=2 Nov 22 07:16:37 crc kubenswrapper[4856]: I1122 07:16:37.673098 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-77b44546b8-xkpms" Nov 22 07:16:37 crc kubenswrapper[4856]: I1122 07:16:37.723450 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-57k7r"] Nov 22 07:16:38 crc kubenswrapper[4856]: E1122 07:16:38.347126 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e7ba10_389d_422c_b193_b61753dc349a.slice/crio-conmon-441ffb0b484844a8785ff3222cc3863efaea74a9fb00aeed44e4f6385de75886.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:16:38 crc kubenswrapper[4856]: I1122 07:16:38.676247 4856 generic.go:334] "Generic (PLEG): container finished" podID="d3e7ba10-389d-422c-b193-b61753dc349a" containerID="441ffb0b484844a8785ff3222cc3863efaea74a9fb00aeed44e4f6385de75886" exitCode=0 Nov 22 07:16:38 crc kubenswrapper[4856]: I1122 07:16:38.676323 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbpr" event={"ID":"d3e7ba10-389d-422c-b193-b61753dc349a","Type":"ContainerDied","Data":"441ffb0b484844a8785ff3222cc3863efaea74a9fb00aeed44e4f6385de75886"} Nov 22 07:16:38 crc kubenswrapper[4856]: I1122 07:16:38.899379 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:38 crc kubenswrapper[4856]: I1122 07:16:38.991420 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-catalog-content\") pod \"d3e7ba10-389d-422c-b193-b61753dc349a\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.077397 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3e7ba10-389d-422c-b193-b61753dc349a" (UID: "d3e7ba10-389d-422c-b193-b61753dc349a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.092345 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-utilities\") pod \"d3e7ba10-389d-422c-b193-b61753dc349a\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.092395 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6gxr\" (UniqueName: \"kubernetes.io/projected/d3e7ba10-389d-422c-b193-b61753dc349a-kube-api-access-c6gxr\") pod \"d3e7ba10-389d-422c-b193-b61753dc349a\" (UID: \"d3e7ba10-389d-422c-b193-b61753dc349a\") " Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.092633 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.093290 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-utilities" (OuterVolumeSpecName: "utilities") pod "d3e7ba10-389d-422c-b193-b61753dc349a" (UID: "d3e7ba10-389d-422c-b193-b61753dc349a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.098264 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e7ba10-389d-422c-b193-b61753dc349a-kube-api-access-c6gxr" (OuterVolumeSpecName: "kube-api-access-c6gxr") pod "d3e7ba10-389d-422c-b193-b61753dc349a" (UID: "d3e7ba10-389d-422c-b193-b61753dc349a"). InnerVolumeSpecName "kube-api-access-c6gxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.193574 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3e7ba10-389d-422c-b193-b61753dc349a-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.193619 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6gxr\" (UniqueName: \"kubernetes.io/projected/d3e7ba10-389d-422c-b193-b61753dc349a-kube-api-access-c6gxr\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.684697 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lfbpr" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.684707 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lfbpr" event={"ID":"d3e7ba10-389d-422c-b193-b61753dc349a","Type":"ContainerDied","Data":"1d018e4670ad2c68d18906190cb18d959e86177b544c23ac3f1ae0aecd7f9e5e"} Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.684799 4856 scope.go:117] "RemoveContainer" containerID="441ffb0b484844a8785ff3222cc3863efaea74a9fb00aeed44e4f6385de75886" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.711718 4856 scope.go:117] "RemoveContainer" containerID="44172cf22ca5ed5a2d40a56370cd7927f68868496ad974a1e15938c52d1001c1" Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.717374 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lfbpr"] Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.720794 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lfbpr"] Nov 22 07:16:39 crc kubenswrapper[4856]: I1122 07:16:39.744499 4856 scope.go:117] "RemoveContainer" containerID="3612e4e1816248d469271e7e476d8cd48ade5c40b6b70f3c4fc8aaf9229deeff" Nov 22 07:16:40 crc kubenswrapper[4856]: I1122 07:16:40.726109 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" path="/var/lib/kubelet/pods/d3e7ba10-389d-422c-b193-b61753dc349a/volumes" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.303315 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cgks6"] Nov 22 07:16:42 crc kubenswrapper[4856]: E1122 07:16:42.303601 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="extract-utilities" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.305206 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="extract-utilities" Nov 22 07:16:42 crc kubenswrapper[4856]: E1122 07:16:42.305308 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="registry-server" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.305319 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="registry-server" Nov 22 07:16:42 crc kubenswrapper[4856]: E1122 07:16:42.305346 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="extract-content" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.305354 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="extract-content" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.305694 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e7ba10-389d-422c-b193-b61753dc349a" containerName="registry-server" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.308227 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.314824 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgks6"] Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.435237 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cghd\" (UniqueName: \"kubernetes.io/projected/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-kube-api-access-7cghd\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.435293 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-catalog-content\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.435401 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-utilities\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.537149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cghd\" (UniqueName: \"kubernetes.io/projected/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-kube-api-access-7cghd\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.537222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-catalog-content\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.537281 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-utilities\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.537864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-catalog-content\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.537941 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-utilities\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.557545 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7cghd\" (UniqueName: \"kubernetes.io/projected/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-kube-api-access-7cghd\") pod \"community-operators-cgks6\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:42 crc kubenswrapper[4856]: I1122 07:16:42.636011 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:43 crc kubenswrapper[4856]: I1122 07:16:43.115799 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cgks6"] Nov 22 07:16:43 crc kubenswrapper[4856]: W1122 07:16:43.122809 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f1f24e9_6c84_412a_a0c5_7ec78c7ef055.slice/crio-a95fa2af63877b40e9d563883a3f6dcd8cec510eb7ed398dfc92a572b113480a WatchSource:0}: Error finding container a95fa2af63877b40e9d563883a3f6dcd8cec510eb7ed398dfc92a572b113480a: Status 404 returned error can't find the container with id a95fa2af63877b40e9d563883a3f6dcd8cec510eb7ed398dfc92a572b113480a Nov 22 07:16:43 crc kubenswrapper[4856]: I1122 07:16:43.714462 4856 generic.go:334] "Generic (PLEG): container finished" podID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerID="2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c" exitCode=0 Nov 22 07:16:43 crc kubenswrapper[4856]: I1122 07:16:43.714576 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgks6" event={"ID":"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055","Type":"ContainerDied","Data":"2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c"} Nov 22 07:16:43 crc kubenswrapper[4856]: I1122 07:16:43.714772 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgks6" event={"ID":"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055","Type":"ContainerStarted","Data":"a95fa2af63877b40e9d563883a3f6dcd8cec510eb7ed398dfc92a572b113480a"} Nov 22 07:16:45 crc kubenswrapper[4856]: I1122 07:16:45.727459 4856 generic.go:334] "Generic (PLEG): container finished" podID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerID="c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259" exitCode=0 Nov 22 07:16:45 crc kubenswrapper[4856]: I1122 07:16:45.727870 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgks6" event={"ID":"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055","Type":"ContainerDied","Data":"c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259"} Nov 22 07:16:46 crc kubenswrapper[4856]: I1122 07:16:46.734615 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgks6" event={"ID":"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055","Type":"ContainerStarted","Data":"730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91"} Nov 22 07:16:46 crc kubenswrapper[4856]: I1122 07:16:46.752145 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cgks6" podStartSLOduration=2.184167726 podStartE2EDuration="4.752126856s" podCreationTimestamp="2025-11-22 07:16:42 +0000 UTC" firstStartedPulling="2025-11-22 07:16:43.716055198 +0000 UTC m=+846.129448476" lastFinishedPulling="2025-11-22 07:16:46.284014348 +0000 UTC m=+848.697407606" observedRunningTime="2025-11-22 07:16:46.750297667 +0000 UTC 
m=+849.163690935" watchObservedRunningTime="2025-11-22 07:16:46.752126856 +0000 UTC m=+849.165520114" Nov 22 07:16:47 crc kubenswrapper[4856]: I1122 07:16:47.519533 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-pnd6q" Nov 22 07:16:52 crc kubenswrapper[4856]: I1122 07:16:52.636946 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:52 crc kubenswrapper[4856]: I1122 07:16:52.639090 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:52 crc kubenswrapper[4856]: I1122 07:16:52.683185 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:52 crc kubenswrapper[4856]: I1122 07:16:52.800884 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:52 crc kubenswrapper[4856]: I1122 07:16:52.910229 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgks6"] Nov 22 07:16:54 crc kubenswrapper[4856]: I1122 07:16:54.780702 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cgks6" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="registry-server" containerID="cri-o://730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91" gracePeriod=2 Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.138498 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.210859 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cghd\" (UniqueName: \"kubernetes.io/projected/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-kube-api-access-7cghd\") pod \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.210934 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-catalog-content\") pod \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.210958 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-utilities\") pod \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\" (UID: \"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055\") " Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.212131 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-utilities" (OuterVolumeSpecName: "utilities") pod "1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" (UID: "1f1f24e9-6c84-412a-a0c5-7ec78c7ef055"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.222897 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-kube-api-access-7cghd" (OuterVolumeSpecName: "kube-api-access-7cghd") pod "1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" (UID: "1f1f24e9-6c84-412a-a0c5-7ec78c7ef055"). InnerVolumeSpecName "kube-api-access-7cghd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.275634 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" (UID: "1f1f24e9-6c84-412a-a0c5-7ec78c7ef055"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.312034 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.312079 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.312091 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cghd\" (UniqueName: \"kubernetes.io/projected/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055-kube-api-access-7cghd\") on node \"crc\" DevicePath \"\"" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.790313 4856 generic.go:334] "Generic (PLEG): container finished" podID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerID="730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91" exitCode=0 Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.790376 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgks6" event={"ID":"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055","Type":"ContainerDied","Data":"730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91"} Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.790413 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cgks6" event={"ID":"1f1f24e9-6c84-412a-a0c5-7ec78c7ef055","Type":"ContainerDied","Data":"a95fa2af63877b40e9d563883a3f6dcd8cec510eb7ed398dfc92a572b113480a"} Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.790439 4856 scope.go:117] "RemoveContainer" containerID="730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.790558 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cgks6" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.812663 4856 scope.go:117] "RemoveContainer" containerID="c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.820043 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cgks6"] Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.823268 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cgks6"] Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.852446 4856 scope.go:117] "RemoveContainer" containerID="2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.869111 4856 scope.go:117] "RemoveContainer" containerID="730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91" Nov 22 07:16:55 crc kubenswrapper[4856]: E1122 07:16:55.869760 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91\": container with ID starting with 730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91 not found: ID does not exist" containerID="730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.869808 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91"} err="failed to get container status \"730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91\": rpc error: code = NotFound desc = could not find container \"730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91\": container with ID starting with 730b6713108d823fe9856a178a09e527bb529a54634ea6a74d2f73632590fe91 not found: ID does not exist" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.869841 4856 scope.go:117] "RemoveContainer" containerID="c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259" Nov 22 07:16:55 crc kubenswrapper[4856]: E1122 07:16:55.870224 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259\": container with ID starting with c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259 not found: ID does not exist" containerID="c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.870278 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259"} err="failed to get container status \"c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259\": rpc error: code = NotFound desc = could not find container \"c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259\": container with ID starting with c936476cc62023b6e747a10867bb092fc89b66d1f24bc75638d1457fcc817259 not found: ID does not exist" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.870325 4856 scope.go:117] "RemoveContainer" containerID="2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c" Nov 22 07:16:55 crc kubenswrapper[4856]: E1122 07:16:55.871115 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c\": container with ID starting with 2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c not found: ID does not exist" containerID="2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c" Nov 22 07:16:55 crc kubenswrapper[4856]: I1122 07:16:55.871163 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c"} err="failed to get container status \"2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c\": rpc error: code = NotFound desc = could not find container \"2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c\": container with ID starting with 2eeab21273aa8105ccf74dbd8bf1d1ce0f90d48baacb9a79abee2c46d078505c not found: ID does not exist" Nov 22 07:16:56 crc kubenswrapper[4856]: I1122 07:16:56.725950 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" path="/var/lib/kubelet/pods/1f1f24e9-6c84-412a-a0c5-7ec78c7ef055/volumes" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.020312 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k"] Nov 22 07:16:59 crc kubenswrapper[4856]: E1122 07:16:59.021208 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="extract-content" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.021237 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="extract-content" Nov 22 07:16:59 crc kubenswrapper[4856]: E1122 07:16:59.021249 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="registry-server" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.021257 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="registry-server" Nov 22 07:16:59 crc kubenswrapper[4856]: E1122 07:16:59.021305 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="extract-utilities" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.021316 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="extract-utilities" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.021441 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1f24e9-6c84-412a-a0c5-7ec78c7ef055" containerName="registry-server" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.022440 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.024552 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.029386 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k"] Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.157770 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.157835 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czlq4\" (UniqueName: \"kubernetes.io/projected/f2efd150-a416-4567-8919-bfc240a93eb0-kube-api-access-czlq4\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.158158 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.259189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.259247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czlq4\" (UniqueName: \"kubernetes.io/projected/f2efd150-a416-4567-8919-bfc240a93eb0-kube-api-access-czlq4\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.259294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.259869 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.260648 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.282945 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czlq4\" (UniqueName: \"kubernetes.io/projected/f2efd150-a416-4567-8919-bfc240a93eb0-kube-api-access-czlq4\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.374059 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.753717 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k"] Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.754153 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.754519 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.754608 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.755290 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"704ded6d89f91ae94e03498e78b0126d0b80a3e0d0c6bf737cb1be33e4a00015"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.755355 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://704ded6d89f91ae94e03498e78b0126d0b80a3e0d0c6bf737cb1be33e4a00015" gracePeriod=600 Nov 22 07:16:59 crc kubenswrapper[4856]: I1122 07:16:59.813996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" event={"ID":"f2efd150-a416-4567-8919-bfc240a93eb0","Type":"ContainerStarted","Data":"d90075618fc6095efca4337be2c1830d478fb1fe8d3624525f40344f10daf915"} Nov 22 07:17:00 crc kubenswrapper[4856]: I1122 07:17:00.821983 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="704ded6d89f91ae94e03498e78b0126d0b80a3e0d0c6bf737cb1be33e4a00015" exitCode=0 Nov 22 07:17:00 crc kubenswrapper[4856]: I1122 07:17:00.822049 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"704ded6d89f91ae94e03498e78b0126d0b80a3e0d0c6bf737cb1be33e4a00015"} Nov 22 07:17:00 crc kubenswrapper[4856]: I1122 07:17:00.822396 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"b2ea5ccf83836498246295e06fea7da0e6ecc690c06aeac649547d0e64344abd"} Nov 22 07:17:00 crc kubenswrapper[4856]: I1122 07:17:00.822427 4856 scope.go:117] "RemoveContainer" containerID="97adb7720511ab281b9b6ad25fd800510b058455c6ccc3c71322ef809023ee98" Nov 22 07:17:00 crc kubenswrapper[4856]: I1122 07:17:00.823939 4856 generic.go:334] "Generic (PLEG): container finished" podID="f2efd150-a416-4567-8919-bfc240a93eb0" containerID="ffa51a3913f9efea444a637089d71d062c3ca00bfa0418d790f67257957c67c8" exitCode=0 Nov 22 07:17:00 crc kubenswrapper[4856]: I1122 07:17:00.823991 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" event={"ID":"f2efd150-a416-4567-8919-bfc240a93eb0","Type":"ContainerDied","Data":"ffa51a3913f9efea444a637089d71d062c3ca00bfa0418d790f67257957c67c8"} Nov 22 07:17:02 crc kubenswrapper[4856]: I1122 07:17:02.764751 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-57k7r" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerName="console" containerID="cri-o://6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb" gracePeriod=15 Nov 22 07:17:02 crc kubenswrapper[4856]: I1122 07:17:02.843711 4856 generic.go:334] "Generic (PLEG): container finished" podID="f2efd150-a416-4567-8919-bfc240a93eb0" containerID="d6084a04fd1d4fa205e83e39ff2197a98631d6e875f72578482297a87e71f4c4" exitCode=0 Nov 22 07:17:02 crc kubenswrapper[4856]: I1122 07:17:02.843761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" event={"ID":"f2efd150-a416-4567-8919-bfc240a93eb0","Type":"ContainerDied","Data":"d6084a04fd1d4fa205e83e39ff2197a98631d6e875f72578482297a87e71f4c4"} Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.205234 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-57k7r_2cb75722-66d1-46a3-b867-1cab32f01ede/console/0.log" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.205618 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.322163 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-console-config\") pod \"2cb75722-66d1-46a3-b867-1cab32f01ede\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.322248 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-serving-cert\") pod \"2cb75722-66d1-46a3-b867-1cab32f01ede\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.322290 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-service-ca\") pod \"2cb75722-66d1-46a3-b867-1cab32f01ede\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.322309 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45xtv\" (UniqueName: \"kubernetes.io/projected/2cb75722-66d1-46a3-b867-1cab32f01ede-kube-api-access-45xtv\") pod \"2cb75722-66d1-46a3-b867-1cab32f01ede\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.322404 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-oauth-config\") pod \"2cb75722-66d1-46a3-b867-1cab32f01ede\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.322445 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-oauth-serving-cert\") pod \"2cb75722-66d1-46a3-b867-1cab32f01ede\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.322491 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-trusted-ca-bundle\") pod \"2cb75722-66d1-46a3-b867-1cab32f01ede\" (UID: \"2cb75722-66d1-46a3-b867-1cab32f01ede\") " Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.323004 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-console-config" (OuterVolumeSpecName: "console-config") pod "2cb75722-66d1-46a3-b867-1cab32f01ede" (UID: "2cb75722-66d1-46a3-b867-1cab32f01ede"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.323043 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-service-ca" (OuterVolumeSpecName: "service-ca") pod "2cb75722-66d1-46a3-b867-1cab32f01ede" (UID: "2cb75722-66d1-46a3-b867-1cab32f01ede"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.323064 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2cb75722-66d1-46a3-b867-1cab32f01ede" (UID: "2cb75722-66d1-46a3-b867-1cab32f01ede"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.323083 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2cb75722-66d1-46a3-b867-1cab32f01ede" (UID: "2cb75722-66d1-46a3-b867-1cab32f01ede"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.327488 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2cb75722-66d1-46a3-b867-1cab32f01ede" (UID: "2cb75722-66d1-46a3-b867-1cab32f01ede"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.327866 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2cb75722-66d1-46a3-b867-1cab32f01ede" (UID: "2cb75722-66d1-46a3-b867-1cab32f01ede"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.328837 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb75722-66d1-46a3-b867-1cab32f01ede-kube-api-access-45xtv" (OuterVolumeSpecName: "kube-api-access-45xtv") pod "2cb75722-66d1-46a3-b867-1cab32f01ede" (UID: "2cb75722-66d1-46a3-b867-1cab32f01ede"). InnerVolumeSpecName "kube-api-access-45xtv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.423258 4856 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.423304 4856 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-service-ca\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.423318 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45xtv\" (UniqueName: \"kubernetes.io/projected/2cb75722-66d1-46a3-b867-1cab32f01ede-kube-api-access-45xtv\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.423327 4856 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2cb75722-66d1-46a3-b867-1cab32f01ede-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.423336 4856 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.423345 4856 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.423353 4856 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2cb75722-66d1-46a3-b867-1cab32f01ede-console-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.852641 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-57k7r_2cb75722-66d1-46a3-b867-1cab32f01ede/console/0.log" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.852692 4856 generic.go:334] "Generic (PLEG): container finished" podID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerID="6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb" exitCode=2 Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.852755 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57k7r" event={"ID":"2cb75722-66d1-46a3-b867-1cab32f01ede","Type":"ContainerDied","Data":"6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb"} Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.852761 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-57k7r" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.852786 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-57k7r" event={"ID":"2cb75722-66d1-46a3-b867-1cab32f01ede","Type":"ContainerDied","Data":"66eda2f63d1b3fe39ffc6c3506c0c2e04e81446e43d279ab5bb82e89ccdf1a9b"} Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.852804 4856 scope.go:117] "RemoveContainer" containerID="6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.855250 4856 generic.go:334] "Generic (PLEG): container finished" podID="f2efd150-a416-4567-8919-bfc240a93eb0" containerID="469a7e24a2278c528f14e55c322a035a0fd34a362a98a0ca3b4db1932c8ac7d1" exitCode=0 Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.855310 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" event={"ID":"f2efd150-a416-4567-8919-bfc240a93eb0","Type":"ContainerDied","Data":"469a7e24a2278c528f14e55c322a035a0fd34a362a98a0ca3b4db1932c8ac7d1"} Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.876328 4856 scope.go:117] "RemoveContainer" containerID="6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb" Nov 22 07:17:03 crc kubenswrapper[4856]: E1122 07:17:03.877878 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb\": container with ID starting with 6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb not found: ID does not exist" containerID="6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.877921 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb"} err="failed to get container status \"6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb\": rpc error: code = NotFound desc = could not find container \"6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb\": container with ID starting with 6ed845ac13902ff87a4299a4235d256425213ffc39ccd65a2aa264e5dd07fccb not found: ID does not exist" Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.894309 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-57k7r"] Nov 22 07:17:03 crc kubenswrapper[4856]: I1122 07:17:03.897464 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-57k7r"] Nov 22 07:17:04 crc kubenswrapper[4856]: I1122 07:17:04.716180 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" path="/var/lib/kubelet/pods/2cb75722-66d1-46a3-b867-1cab32f01ede/volumes" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.089913 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.144812 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-util\") pod \"f2efd150-a416-4567-8919-bfc240a93eb0\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.144886 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-bundle\") pod \"f2efd150-a416-4567-8919-bfc240a93eb0\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.144945 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czlq4\" (UniqueName: \"kubernetes.io/projected/f2efd150-a416-4567-8919-bfc240a93eb0-kube-api-access-czlq4\") pod \"f2efd150-a416-4567-8919-bfc240a93eb0\" (UID: \"f2efd150-a416-4567-8919-bfc240a93eb0\") " Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.145908 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-bundle" (OuterVolumeSpecName: "bundle") pod "f2efd150-a416-4567-8919-bfc240a93eb0" (UID: "f2efd150-a416-4567-8919-bfc240a93eb0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.151762 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2efd150-a416-4567-8919-bfc240a93eb0-kube-api-access-czlq4" (OuterVolumeSpecName: "kube-api-access-czlq4") pod "f2efd150-a416-4567-8919-bfc240a93eb0" (UID: "f2efd150-a416-4567-8919-bfc240a93eb0"). InnerVolumeSpecName "kube-api-access-czlq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.158941 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-util" (OuterVolumeSpecName: "util") pod "f2efd150-a416-4567-8919-bfc240a93eb0" (UID: "f2efd150-a416-4567-8919-bfc240a93eb0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.246164 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czlq4\" (UniqueName: \"kubernetes.io/projected/f2efd150-a416-4567-8919-bfc240a93eb0-kube-api-access-czlq4\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.246199 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.246209 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f2efd150-a416-4567-8919-bfc240a93eb0-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.867637 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" event={"ID":"f2efd150-a416-4567-8919-bfc240a93eb0","Type":"ContainerDied","Data":"d90075618fc6095efca4337be2c1830d478fb1fe8d3624525f40344f10daf915"} Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.867981 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d90075618fc6095efca4337be2c1830d478fb1fe8d3624525f40344f10daf915" Nov 22 07:17:05 crc kubenswrapper[4856]: I1122 07:17:05.867733 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.457025 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm"] Nov 22 07:17:15 crc kubenswrapper[4856]: E1122 07:17:15.457701 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2efd150-a416-4567-8919-bfc240a93eb0" containerName="util" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.457719 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2efd150-a416-4567-8919-bfc240a93eb0" containerName="util" Nov 22 07:17:15 crc kubenswrapper[4856]: E1122 07:17:15.457739 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerName="console" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.457747 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerName="console" Nov 22 07:17:15 crc kubenswrapper[4856]: E1122 07:17:15.457760 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2efd150-a416-4567-8919-bfc240a93eb0" containerName="extract" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.457768 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2efd150-a416-4567-8919-bfc240a93eb0" containerName="extract" Nov 22 07:17:15 crc kubenswrapper[4856]: E1122 07:17:15.457776 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2efd150-a416-4567-8919-bfc240a93eb0" containerName="pull" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.457783 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2efd150-a416-4567-8919-bfc240a93eb0" containerName="pull" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.457900 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2efd150-a416-4567-8919-bfc240a93eb0" containerName="extract" Nov 
22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.457914 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb75722-66d1-46a3-b867-1cab32f01ede" containerName="console" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.458267 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.462876 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.462895 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.463193 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-892x2" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.463284 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.463395 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.470825 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-webhook-cert\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.470876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-apiservice-cert\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.470923 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zd2\" (UniqueName: \"kubernetes.io/projected/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-kube-api-access-j6zd2\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.474322 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm"] Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.572380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zd2\" (UniqueName: \"kubernetes.io/projected/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-kube-api-access-j6zd2\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.572673 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-webhook-cert\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.572708 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-apiservice-cert\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.578680 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-webhook-cert\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.579964 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-apiservice-cert\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.592223 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zd2\" (UniqueName: \"kubernetes.io/projected/cae02c81-3bae-4eb3-a934-f66f9e4c3ce2-kube-api-access-j6zd2\") pod \"metallb-operator-controller-manager-579ff74fd9-zgszm\" (UID: \"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2\") " pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.707856 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7"] Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.708739 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.712308 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.714686 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.719551 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-jwr6j" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.767830 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7"] Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.775956 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.876835 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-webhook-cert\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.877048 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-apiservice-cert\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.877152 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bbpz\" (UniqueName: \"kubernetes.io/projected/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-kube-api-access-8bbpz\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.978427 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-webhook-cert\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.978523 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-apiservice-cert\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.978558 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bbpz\" (UniqueName: \"kubernetes.io/projected/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-kube-api-access-8bbpz\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.984184 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-webhook-cert\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:15 crc kubenswrapper[4856]: I1122 07:17:15.988260 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-apiservice-cert\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " 
pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:16 crc kubenswrapper[4856]: I1122 07:17:16.026009 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bbpz\" (UniqueName: \"kubernetes.io/projected/9b8a077c-4fa3-419a-bcd1-12bd366a1ef8-kube-api-access-8bbpz\") pod \"metallb-operator-webhook-server-77bfffbc85-hkqb7\" (UID: \"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8\") " pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:16 crc kubenswrapper[4856]: I1122 07:17:16.216264 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm"] Nov 22 07:17:16 crc kubenswrapper[4856]: W1122 07:17:16.222638 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcae02c81_3bae_4eb3_a934_f66f9e4c3ce2.slice/crio-45bd07eb206fb91a15724fa703cbca966e6c652f732071b89aeafd19f8aa1cb2 WatchSource:0}: Error finding container 45bd07eb206fb91a15724fa703cbca966e6c652f732071b89aeafd19f8aa1cb2: Status 404 returned error can't find the container with id 45bd07eb206fb91a15724fa703cbca966e6c652f732071b89aeafd19f8aa1cb2 Nov 22 07:17:16 crc kubenswrapper[4856]: I1122 07:17:16.323876 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:16 crc kubenswrapper[4856]: I1122 07:17:16.510639 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7"] Nov 22 07:17:16 crc kubenswrapper[4856]: W1122 07:17:16.519577 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b8a077c_4fa3_419a_bcd1_12bd366a1ef8.slice/crio-4ff95b9049cfaf0fcc3ae41be6747458cb910cf5fd61ea5a8181d6425f8255db WatchSource:0}: Error finding container 4ff95b9049cfaf0fcc3ae41be6747458cb910cf5fd61ea5a8181d6425f8255db: Status 404 returned error can't find the container with id 4ff95b9049cfaf0fcc3ae41be6747458cb910cf5fd61ea5a8181d6425f8255db Nov 22 07:17:16 crc kubenswrapper[4856]: I1122 07:17:16.927309 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" event={"ID":"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2","Type":"ContainerStarted","Data":"45bd07eb206fb91a15724fa703cbca966e6c652f732071b89aeafd19f8aa1cb2"} Nov 22 07:17:16 crc kubenswrapper[4856]: I1122 07:17:16.928605 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" event={"ID":"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8","Type":"ContainerStarted","Data":"4ff95b9049cfaf0fcc3ae41be6747458cb910cf5fd61ea5a8181d6425f8255db"} Nov 22 07:17:21 crc kubenswrapper[4856]: I1122 07:17:21.967718 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" event={"ID":"cae02c81-3bae-4eb3-a934-f66f9e4c3ce2","Type":"ContainerStarted","Data":"1d0086be30f3b59aee00ca4ad8ccc85d25b75289f715441be7e2ef00278e212f"} Nov 22 07:17:21 crc kubenswrapper[4856]: I1122 07:17:21.968396 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:21 crc kubenswrapper[4856]: I1122 07:17:21.969729 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" event={"ID":"9b8a077c-4fa3-419a-bcd1-12bd366a1ef8","Type":"ContainerStarted","Data":"7e969b2713d936e7cb8c08a58574dd52b2208fe9672bfc8419aa6bd739759952"} Nov 22 07:17:21 crc kubenswrapper[4856]: I1122 07:17:21.969839 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:21 crc kubenswrapper[4856]: I1122 07:17:21.989128 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" podStartSLOduration=1.814152918 podStartE2EDuration="6.989105271s" podCreationTimestamp="2025-11-22 07:17:15 +0000 UTC" firstStartedPulling="2025-11-22 07:17:16.224669918 +0000 UTC m=+878.638063176" lastFinishedPulling="2025-11-22 07:17:21.399622271 +0000 UTC m=+883.813015529" observedRunningTime="2025-11-22 07:17:21.986440559 +0000 UTC m=+884.399833817" watchObservedRunningTime="2025-11-22 07:17:21.989105271 +0000 UTC m=+884.402498519" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.133425 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" podStartSLOduration=3.060578195 podStartE2EDuration="8.133396825s" podCreationTimestamp="2025-11-22 07:17:15 +0000 UTC" firstStartedPulling="2025-11-22 07:17:16.52261831 +0000 UTC m=+878.936011568" lastFinishedPulling="2025-11-22 07:17:21.59543694 +0000 UTC m=+884.008830198" observedRunningTime="2025-11-22 07:17:22.009684086 +0000 UTC m=+884.423077354" watchObservedRunningTime="2025-11-22 07:17:23.133396825 +0000 UTC m=+885.546790083" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.135732 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmz4"] Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.137158 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.148214 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmz4"] Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.306216 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-utilities\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.306318 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-catalog-content\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.306344 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8tv\" (UniqueName: \"kubernetes.io/projected/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-kube-api-access-tp8tv\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.407280 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-catalog-content\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.407338 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp8tv\" (UniqueName: \"kubernetes.io/projected/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-kube-api-access-tp8tv\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.407396 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-utilities\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.408187 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-catalog-content\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.408245 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-utilities\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.433954 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tp8tv\" (UniqueName: \"kubernetes.io/projected/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-kube-api-access-tp8tv\") pod \"redhat-marketplace-xbmz4\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.455538 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.900954 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmz4"] Nov 22 07:17:23 crc kubenswrapper[4856]: I1122 07:17:23.982556 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmz4" event={"ID":"e0e89e9e-ecf0-471e-8de7-de394c06d7e0","Type":"ContainerStarted","Data":"ad2d91a37ae7a1585aaaf0abfafa48fc430de47c77605115b2d8275c35b8e804"} Nov 22 07:17:24 crc kubenswrapper[4856]: I1122 07:17:24.990014 4856 generic.go:334] "Generic (PLEG): container finished" podID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerID="87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78" exitCode=0 Nov 22 07:17:24 crc kubenswrapper[4856]: I1122 07:17:24.990091 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmz4" event={"ID":"e0e89e9e-ecf0-471e-8de7-de394c06d7e0","Type":"ContainerDied","Data":"87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78"} Nov 22 07:17:27 crc kubenswrapper[4856]: I1122 07:17:27.003360 4856 generic.go:334] "Generic (PLEG): container finished" podID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerID="9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442" exitCode=0 Nov 22 07:17:27 crc kubenswrapper[4856]: I1122 07:17:27.003428 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmz4" event={"ID":"e0e89e9e-ecf0-471e-8de7-de394c06d7e0","Type":"ContainerDied","Data":"9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442"} Nov 22 07:17:29 crc kubenswrapper[4856]: I1122 07:17:29.023601 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmz4" event={"ID":"e0e89e9e-ecf0-471e-8de7-de394c06d7e0","Type":"ContainerStarted","Data":"de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7"} Nov 22 07:17:29 crc kubenswrapper[4856]: I1122 07:17:29.046049 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xbmz4" podStartSLOduration=2.940908034 podStartE2EDuration="6.046022832s" podCreationTimestamp="2025-11-22 07:17:23 +0000 UTC" firstStartedPulling="2025-11-22 07:17:24.9921842 +0000 UTC m=+887.405577458" lastFinishedPulling="2025-11-22 07:17:28.097298998 +0000 UTC m=+890.510692256" observedRunningTime="2025-11-22 07:17:29.044045709 +0000 UTC m=+891.457438977" watchObservedRunningTime="2025-11-22 07:17:29.046022832 +0000 UTC m=+891.459416100" Nov 22 07:17:33 crc kubenswrapper[4856]: I1122 07:17:33.456554 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:33 crc kubenswrapper[4856]: I1122 07:17:33.457094 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:33 crc kubenswrapper[4856]: I1122 07:17:33.493221 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:34 crc kubenswrapper[4856]: I1122 07:17:34.088883 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:35 crc kubenswrapper[4856]: I1122 07:17:35.906209 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmz4"] Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.055883 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xbmz4" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="registry-server" containerID="cri-o://de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7" gracePeriod=2 Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.328103 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-77bfffbc85-hkqb7" Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.441749 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.491298 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-utilities\") pod \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.491398 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-catalog-content\") pod \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.491438 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tp8tv\" (UniqueName: \"kubernetes.io/projected/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-kube-api-access-tp8tv\") pod \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\" (UID: \"e0e89e9e-ecf0-471e-8de7-de394c06d7e0\") " Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.493294 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-utilities" (OuterVolumeSpecName: "utilities") pod "e0e89e9e-ecf0-471e-8de7-de394c06d7e0" (UID: "e0e89e9e-ecf0-471e-8de7-de394c06d7e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.498557 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-kube-api-access-tp8tv" (OuterVolumeSpecName: "kube-api-access-tp8tv") pod "e0e89e9e-ecf0-471e-8de7-de394c06d7e0" (UID: "e0e89e9e-ecf0-471e-8de7-de394c06d7e0"). InnerVolumeSpecName "kube-api-access-tp8tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.513821 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0e89e9e-ecf0-471e-8de7-de394c06d7e0" (UID: "e0e89e9e-ecf0-471e-8de7-de394c06d7e0"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.593582 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.593639 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tp8tv\" (UniqueName: \"kubernetes.io/projected/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-kube-api-access-tp8tv\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:36 crc kubenswrapper[4856]: I1122 07:17:36.593652 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0e89e9e-ecf0-471e-8de7-de394c06d7e0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.063170 4856 generic.go:334] "Generic (PLEG): container finished" podID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerID="de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7" exitCode=0 Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.063233 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmz4" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.063245 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmz4" event={"ID":"e0e89e9e-ecf0-471e-8de7-de394c06d7e0","Type":"ContainerDied","Data":"de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7"} Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.063577 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmz4" event={"ID":"e0e89e9e-ecf0-471e-8de7-de394c06d7e0","Type":"ContainerDied","Data":"ad2d91a37ae7a1585aaaf0abfafa48fc430de47c77605115b2d8275c35b8e804"} Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.063599 4856 scope.go:117] "RemoveContainer" containerID="de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.080877 4856 scope.go:117] "RemoveContainer" containerID="9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.086973 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmz4"] Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.090917 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmz4"] Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.097648 4856 scope.go:117] "RemoveContainer" containerID="87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.160866 4856 scope.go:117] "RemoveContainer" containerID="de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7" Nov 22 07:17:37 crc kubenswrapper[4856]: E1122 07:17:37.161297 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7\": container with ID starting with de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7 not found: ID does not exist" containerID="de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7" Nov 22 07:17:37 crc 
kubenswrapper[4856]: I1122 07:17:37.161340 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7"} err="failed to get container status \"de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7\": rpc error: code = NotFound desc = could not find container \"de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7\": container with ID starting with de94a8f43dd7388167ba2589c55fa96ae052eba2f9863225e6957519146f0ec7 not found: ID does not exist" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.161367 4856 scope.go:117] "RemoveContainer" containerID="9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442" Nov 22 07:17:37 crc kubenswrapper[4856]: E1122 07:17:37.161684 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442\": container with ID starting with 9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442 not found: ID does not exist" containerID="9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.161707 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442"} err="failed to get container status \"9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442\": rpc error: code = NotFound desc = could not find container \"9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442\": container with ID starting with 9b453b5f18f9bbc990eb04db3039d7b756aba6c27e5220536cf4852093893442 not found: ID does not exist" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.161720 4856 scope.go:117] "RemoveContainer" containerID="87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78" Nov 22 07:17:37 crc kubenswrapper[4856]: E1122 07:17:37.161957 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78\": container with ID starting with 87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78 not found: ID does not exist" containerID="87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78" Nov 22 07:17:37 crc kubenswrapper[4856]: I1122 07:17:37.161981 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78"} err="failed to get container status \"87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78\": rpc error: code = NotFound desc = could not find container \"87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78\": container with ID starting with 87155b43e2c9ae2628f39805465e11892c849ca9ca1b8e588efe111c592cde78 not found: ID does not exist" Nov 22 07:17:38 crc kubenswrapper[4856]: I1122 07:17:38.718413 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" path="/var/lib/kubelet/pods/e0e89e9e-ecf0-471e-8de7-de394c06d7e0/volumes" Nov 22 07:17:55 crc kubenswrapper[4856]: I1122 07:17:55.778784 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-579ff74fd9-zgszm" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 
07:17:56.452093 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rdwqk"] Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.452463 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="registry-server" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.452479 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="registry-server" Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.452488 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="extract-content" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.452495 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="extract-content" Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.452548 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="extract-utilities" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.452555 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="extract-utilities" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.452722 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e89e9e-ecf0-471e-8de7-de394c06d7e0" containerName="registry-server" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.454929 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.455579 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl"] Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.456400 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.458049 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.458395 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-g2kgr" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.458528 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.460490 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.467567 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl"] Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510057 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47dda6c4-0264-433f-9edd-4599ee978799-metrics-certs\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510127 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d96263d-56d8-4b14-a4ab-a5cd75432de3-cert\") pod \"frr-k8s-webhook-server-6998585d5-zwkjl\" (UID: \"7d96263d-56d8-4b14-a4ab-a5cd75432de3\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510161 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-frr-sockets\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510209 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-frr-conf\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510255 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-metrics\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510281 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgc75\" (UniqueName: \"kubernetes.io/projected/47dda6c4-0264-433f-9edd-4599ee978799-kube-api-access-rgc75\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510304 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-reloader\") pod \"frr-k8s-rdwqk\" (UID: 
\"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510363 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/47dda6c4-0264-433f-9edd-4599ee978799-frr-startup\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.510391 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv5c4\" (UniqueName: \"kubernetes.io/projected/7d96263d-56d8-4b14-a4ab-a5cd75432de3-kube-api-access-gv5c4\") pod \"frr-k8s-webhook-server-6998585d5-zwkjl\" (UID: \"7d96263d-56d8-4b14-a4ab-a5cd75432de3\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.547498 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pk8b9"] Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.548339 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.551833 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.553368 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.553589 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-vzw7v" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.557681 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.558194 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-fbctt"] Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.559413 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.560558 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.586948 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fbctt"] Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.613418 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/47dda6c4-0264-433f-9edd-4599ee978799-frr-startup\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.613477 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-metrics-certs\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.613585 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv5c4\" (UniqueName: \"kubernetes.io/projected/7d96263d-56d8-4b14-a4ab-a5cd75432de3-kube-api-access-gv5c4\") pod \"frr-k8s-webhook-server-6998585d5-zwkjl\" (UID: \"7d96263d-56d8-4b14-a4ab-a5cd75432de3\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.613625 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b45b69d0-d481-44ef-a766-6c43dc57be23-metallb-excludel2\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.613650 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47dda6c4-0264-433f-9edd-4599ee978799-metrics-certs\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.613786 4856 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.613848 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47dda6c4-0264-433f-9edd-4599ee978799-metrics-certs podName:47dda6c4-0264-433f-9edd-4599ee978799 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:57.113826491 +0000 UTC m=+919.527219759 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/47dda6c4-0264-433f-9edd-4599ee978799-metrics-certs") pod "frr-k8s-rdwqk" (UID: "47dda6c4-0264-433f-9edd-4599ee978799") : secret "frr-k8s-certs-secret" not found Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614192 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-metrics-certs\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614270 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d96263d-56d8-4b14-a4ab-a5cd75432de3-cert\") pod \"frr-k8s-webhook-server-6998585d5-zwkjl\" (UID: \"7d96263d-56d8-4b14-a4ab-a5cd75432de3\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614306 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-frr-sockets\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614358 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-frr-conf\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614403 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-cert\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614433 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqjpg\" (UniqueName: \"kubernetes.io/projected/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-kube-api-access-nqjpg\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614457 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-metrics\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614482 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgc75\" (UniqueName: \"kubernetes.io/projected/47dda6c4-0264-433f-9edd-4599ee978799-kube-api-access-rgc75\") pod 
\"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-reloader\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.614553 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg564\" (UniqueName: \"kubernetes.io/projected/b45b69d0-d481-44ef-a766-6c43dc57be23-kube-api-access-hg564\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.615320 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/47dda6c4-0264-433f-9edd-4599ee978799-frr-startup\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.615412 4856 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.615452 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d96263d-56d8-4b14-a4ab-a5cd75432de3-cert podName:7d96263d-56d8-4b14-a4ab-a5cd75432de3 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:57.115441055 +0000 UTC m=+919.528834313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7d96263d-56d8-4b14-a4ab-a5cd75432de3-cert") pod "frr-k8s-webhook-server-6998585d5-zwkjl" (UID: "7d96263d-56d8-4b14-a4ab-a5cd75432de3") : secret "frr-k8s-webhook-server-cert" not found Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.617810 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-frr-conf\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.618068 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-reloader\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.618081 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-frr-sockets\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.621579 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/47dda6c4-0264-433f-9edd-4599ee978799-metrics\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.641355 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rgc75\" (UniqueName: \"kubernetes.io/projected/47dda6c4-0264-433f-9edd-4599ee978799-kube-api-access-rgc75\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.643134 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv5c4\" (UniqueName: \"kubernetes.io/projected/7d96263d-56d8-4b14-a4ab-a5cd75432de3-kube-api-access-gv5c4\") pod \"frr-k8s-webhook-server-6998585d5-zwkjl\" (UID: \"7d96263d-56d8-4b14-a4ab-a5cd75432de3\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.716000 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-cert\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.716355 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqjpg\" (UniqueName: \"kubernetes.io/projected/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-kube-api-access-nqjpg\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.716385 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg564\" (UniqueName: \"kubernetes.io/projected/b45b69d0-d481-44ef-a766-6c43dc57be23-kube-api-access-hg564\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.716407 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-metrics-certs\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.716433 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b45b69d0-d481-44ef-a766-6c43dc57be23-metallb-excludel2\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.717304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-metrics-certs\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.716529 4856 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.717385 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-metrics-certs podName:f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:57.217366132 +0000 UTC m=+919.630759390 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-metrics-certs") pod "controller-6c7b4b5f48-fbctt" (UID: "f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92") : secret "controller-certs-secret" not found Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.717604 4856 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.717648 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-metrics-certs podName:b45b69d0-d481-44ef-a766-6c43dc57be23 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:57.217637579 +0000 UTC m=+919.631030837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-metrics-certs") pod "speaker-pk8b9" (UID: "b45b69d0-d481-44ef-a766-6c43dc57be23") : secret "speaker-certs-secret" not found Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.717180 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b45b69d0-d481-44ef-a766-6c43dc57be23-metallb-excludel2\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.717772 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.717854 4856 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 07:17:56 crc kubenswrapper[4856]: E1122 07:17:56.717883 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist podName:b45b69d0-d481-44ef-a766-6c43dc57be23 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:57.217876557 +0000 UTC m=+919.631269815 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist") pod "speaker-pk8b9" (UID: "b45b69d0-d481-44ef-a766-6c43dc57be23") : secret "metallb-memberlist" not found Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.717925 4856 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.735766 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-cert\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.742138 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg564\" (UniqueName: \"kubernetes.io/projected/b45b69d0-d481-44ef-a766-6c43dc57be23-kube-api-access-hg564\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:56 crc kubenswrapper[4856]: I1122 07:17:56.748323 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqjpg\" (UniqueName: \"kubernetes.io/projected/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-kube-api-access-nqjpg\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.123366 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d96263d-56d8-4b14-a4ab-a5cd75432de3-cert\") pod \"frr-k8s-webhook-server-6998585d5-zwkjl\" (UID: \"7d96263d-56d8-4b14-a4ab-a5cd75432de3\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.123544 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47dda6c4-0264-433f-9edd-4599ee978799-metrics-certs\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.127362 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47dda6c4-0264-433f-9edd-4599ee978799-metrics-certs\") pod \"frr-k8s-rdwqk\" (UID: \"47dda6c4-0264-433f-9edd-4599ee978799\") " pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.127943 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7d96263d-56d8-4b14-a4ab-a5cd75432de3-cert\") pod \"frr-k8s-webhook-server-6998585d5-zwkjl\" (UID: \"7d96263d-56d8-4b14-a4ab-a5cd75432de3\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.224630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-metrics-certs\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.224740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-metrics-certs\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.224775 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:57 crc kubenswrapper[4856]: E1122 07:17:57.224951 4856 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 22 07:17:57 crc kubenswrapper[4856]: E1122 07:17:57.225028 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist podName:b45b69d0-d481-44ef-a766-6c43dc57be23 nodeName:}" failed. No retries permitted until 2025-11-22 07:17:58.225007746 +0000 UTC m=+920.638401004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist") pod "speaker-pk8b9" (UID: "b45b69d0-d481-44ef-a766-6c43dc57be23") : secret "metallb-memberlist" not found Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.228700 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92-metrics-certs\") pod \"controller-6c7b4b5f48-fbctt\" (UID: \"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92\") " pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.230143 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-metrics-certs\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.386272 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.403803 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.475342 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.687675 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl"] Nov 22 07:17:57 crc kubenswrapper[4856]: I1122 07:17:57.951229 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fbctt"] Nov 22 07:17:57 crc kubenswrapper[4856]: W1122 07:17:57.958110 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1afef1a_d731_41ed_a7fe_e4e0dcf7ca92.slice/crio-df9040ef651c683227a4909606da480dfc7d0e7e9435a2026ae5bd1ff93e7a6c WatchSource:0}: Error finding container df9040ef651c683227a4909606da480dfc7d0e7e9435a2026ae5bd1ff93e7a6c: Status 404 returned error can't find the container with id df9040ef651c683227a4909606da480dfc7d0e7e9435a2026ae5bd1ff93e7a6c Nov 22 07:17:58 crc kubenswrapper[4856]: I1122 07:17:58.185638 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" event={"ID":"7d96263d-56d8-4b14-a4ab-a5cd75432de3","Type":"ContainerStarted","Data":"7b1634696b19b8b3d7d9b94b6615d323e0bb3c93d27f56e204133f307963ce89"} Nov 22 07:17:58 crc kubenswrapper[4856]: I1122 07:17:58.187292 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerStarted","Data":"c852fdac32a02662eff63dbc5c2f2397a76ff3bd64fd6ab1a7a9a30b7544a9d7"} Nov 22 07:17:58 crc kubenswrapper[4856]: I1122 07:17:58.188789 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fbctt" event={"ID":"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92","Type":"ContainerStarted","Data":"36374c91a012a9c6d086315f81fc22b7a5d2bcefc32cce861fed46657baaec7c"} Nov 22 07:17:58 crc kubenswrapper[4856]: I1122 07:17:58.188812 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fbctt" event={"ID":"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92","Type":"ContainerStarted","Data":"df9040ef651c683227a4909606da480dfc7d0e7e9435a2026ae5bd1ff93e7a6c"} Nov 22 07:17:58 crc kubenswrapper[4856]: I1122 07:17:58.241779 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:58 crc kubenswrapper[4856]: I1122 07:17:58.249210 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b45b69d0-d481-44ef-a766-6c43dc57be23-memberlist\") pod \"speaker-pk8b9\" (UID: \"b45b69d0-d481-44ef-a766-6c43dc57be23\") " pod="metallb-system/speaker-pk8b9" Nov 22 07:17:58 crc kubenswrapper[4856]: I1122 07:17:58.367663 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-pk8b9" Nov 22 07:17:58 crc kubenswrapper[4856]: W1122 07:17:58.393289 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45b69d0_d481_44ef_a766_6c43dc57be23.slice/crio-8d1714bc54feb8ccb143d9fd80042ac15eacbb834b588450b4a77df6fc684ada WatchSource:0}: Error finding container 8d1714bc54feb8ccb143d9fd80042ac15eacbb834b588450b4a77df6fc684ada: Status 404 returned error can't find the container with id 8d1714bc54feb8ccb143d9fd80042ac15eacbb834b588450b4a77df6fc684ada Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.208726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pk8b9" event={"ID":"b45b69d0-d481-44ef-a766-6c43dc57be23","Type":"ContainerStarted","Data":"2fa2257180dade60f8ecad36e622718683093322d9bdaaba1e685926a1cd30a1"} Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.209049 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pk8b9" event={"ID":"b45b69d0-d481-44ef-a766-6c43dc57be23","Type":"ContainerStarted","Data":"3f1b18baf697e5f961206ed04ada5e4e1e8b3698920f47d373b8bffbb71b92a8"} Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.209059 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pk8b9" event={"ID":"b45b69d0-d481-44ef-a766-6c43dc57be23","Type":"ContainerStarted","Data":"8d1714bc54feb8ccb143d9fd80042ac15eacbb834b588450b4a77df6fc684ada"} Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.209228 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pk8b9" Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.211582 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fbctt" event={"ID":"f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92","Type":"ContainerStarted","Data":"c6c3de67b88f5aee343aac6e4d0d522c15f61ae94d846b42f9fcd41b15ab66c2"} Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.211720 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.283724 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pk8b9" podStartSLOduration=3.283701759 podStartE2EDuration="3.283701759s" podCreationTimestamp="2025-11-22 07:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:17:59.266370962 +0000 UTC m=+921.679764220" watchObservedRunningTime="2025-11-22 07:17:59.283701759 +0000 UTC m=+921.697095007" Nov 22 07:17:59 crc kubenswrapper[4856]: I1122 07:17:59.291745 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-fbctt" podStartSLOduration=3.291728475 podStartE2EDuration="3.291728475s" podCreationTimestamp="2025-11-22 07:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:17:59.289885735 +0000 UTC m=+921.703278993" watchObservedRunningTime="2025-11-22 07:17:59.291728475 +0000 UTC m=+921.705121733" Nov 22 07:18:08 crc kubenswrapper[4856]: I1122 07:18:08.371371 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pk8b9" Nov 22 07:18:10 crc kubenswrapper[4856]: I1122 07:18:10.284677 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" event={"ID":"7d96263d-56d8-4b14-a4ab-a5cd75432de3","Type":"ContainerStarted","Data":"c9c12d0263d317b2d419df9c6bbd377c32e8cded7eb7bb05fe2b13a1497bf9ed"} Nov 22 07:18:10 crc kubenswrapper[4856]: I1122 07:18:10.284978 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:18:10 crc kubenswrapper[4856]: I1122 07:18:10.286717 4856 generic.go:334] "Generic (PLEG): container finished" podID="47dda6c4-0264-433f-9edd-4599ee978799" containerID="bc89daa30c28ce26c4c9d990de5f8a0f8c98054d804e855ee340309ecc242c63" exitCode=0 Nov 22 07:18:10 crc kubenswrapper[4856]: I1122 07:18:10.286763 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerDied","Data":"bc89daa30c28ce26c4c9d990de5f8a0f8c98054d804e855ee340309ecc242c63"} Nov 22 07:18:10 crc kubenswrapper[4856]: I1122 07:18:10.302467 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" podStartSLOduration=2.90832944 podStartE2EDuration="14.302445362s" podCreationTimestamp="2025-11-22 07:17:56 +0000 UTC" firstStartedPulling="2025-11-22 07:17:57.695721744 +0000 UTC m=+920.109115012" lastFinishedPulling="2025-11-22 07:18:09.089837676 +0000 UTC m=+931.503230934" observedRunningTime="2025-11-22 07:18:10.29905042 +0000 UTC m=+932.712443688" watchObservedRunningTime="2025-11-22 07:18:10.302445362 +0000 UTC m=+932.715838620" Nov 22 07:18:11 crc kubenswrapper[4856]: I1122 07:18:11.295461 4856 generic.go:334] "Generic (PLEG): container finished" podID="47dda6c4-0264-433f-9edd-4599ee978799" containerID="a965a7de153227c84bd00b3565ffc110c2d37914c2233054006dac174e5f1607" exitCode=0 Nov 22 07:18:11 crc kubenswrapper[4856]: I1122 07:18:11.295590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerDied","Data":"a965a7de153227c84bd00b3565ffc110c2d37914c2233054006dac174e5f1607"} Nov 22 07:18:12 crc kubenswrapper[4856]: I1122 07:18:12.303440 4856 generic.go:334] "Generic (PLEG): container finished" podID="47dda6c4-0264-433f-9edd-4599ee978799" containerID="66f30ca7d40b4f57711e359f5d4f24213ef958b521fb5eff1b2088252904d99b" exitCode=0 Nov 22 07:18:12 crc kubenswrapper[4856]: I1122 07:18:12.303475 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerDied","Data":"66f30ca7d40b4f57711e359f5d4f24213ef958b521fb5eff1b2088252904d99b"} Nov 22 07:18:13 crc kubenswrapper[4856]: I1122 07:18:13.312630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerStarted","Data":"9c739bd9988a0938372a4e7367d4b63d7856833081b5dd869bf95415995f82b3"} Nov 22 07:18:13 crc kubenswrapper[4856]: I1122 07:18:13.312948 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerStarted","Data":"a48c1d14fdde772ffcd20c728ceecd60a7313c2ef5fe5e0e766e9921fbbfc5a1"} Nov 22 07:18:13 crc kubenswrapper[4856]: I1122 07:18:13.312958 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" 
event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerStarted","Data":"a8ebec1241762ce59cda9a46a71e026806528402572455330d51bd76422facd1"} Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.324726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerStarted","Data":"c28c9ede0a5eb9f3cf476ffe7f4fac513ef0df7b379d138e12cf2f7a383cf116"} Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.873746 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln"] Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.875539 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.878150 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.882241 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln"] Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.988255 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.988335 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:14 crc kubenswrapper[4856]: I1122 07:18:14.988521 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxnxl\" (UniqueName: \"kubernetes.io/projected/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-kube-api-access-kxnxl\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.089818 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.089896 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.089938 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxnxl\" (UniqueName: \"kubernetes.io/projected/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-kube-api-access-kxnxl\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.090590 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.090666 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.109881 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxnxl\" (UniqueName: \"kubernetes.io/projected/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-kube-api-access-kxnxl\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.195089 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:15 crc kubenswrapper[4856]: I1122 07:18:15.645758 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln"] Nov 22 07:18:16 crc kubenswrapper[4856]: I1122 07:18:16.341960 4856 generic.go:334] "Generic (PLEG): container finished" podID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerID="caa25a21ff5eb083252f17f3630700a5f4cd7c03c0f84177817aca3b4be5cbec" exitCode=0 Nov 22 07:18:16 crc kubenswrapper[4856]: I1122 07:18:16.342353 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" event={"ID":"53e0b47b-1bfb-4207-bcbe-37ab71f5a642","Type":"ContainerDied","Data":"caa25a21ff5eb083252f17f3630700a5f4cd7c03c0f84177817aca3b4be5cbec"} Nov 22 07:18:16 crc kubenswrapper[4856]: I1122 07:18:16.342387 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" event={"ID":"53e0b47b-1bfb-4207-bcbe-37ab71f5a642","Type":"ContainerStarted","Data":"64f32a499cb3a4e3b794fa4d3da2e1e3ea99de370297f0a27e7083894d0f1ac1"} Nov 22 07:18:16 crc kubenswrapper[4856]: I1122 07:18:16.348676 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerStarted","Data":"4c1b1ee834a02e6549322f9c0508041bfe8748a79fd02363b2b92bfe24638c19"} Nov 22 07:18:17 crc kubenswrapper[4856]: I1122 07:18:17.357813 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rdwqk" event={"ID":"47dda6c4-0264-433f-9edd-4599ee978799","Type":"ContainerStarted","Data":"b40267cee06ef1cf5cf4f0318ad07fb15aa7113e1d5a8afad8b2315e1299b3df"} Nov 22 07:18:17 crc kubenswrapper[4856]: I1122 07:18:17.358808 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:18:17 crc kubenswrapper[4856]: I1122 07:18:17.381660 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rdwqk" podStartSLOduration=9.831935288 podStartE2EDuration="21.381640174s" podCreationTimestamp="2025-11-22 07:17:56 +0000 UTC" firstStartedPulling="2025-11-22 07:17:57.562342879 +0000 UTC m=+919.975736137" lastFinishedPulling="2025-11-22 07:18:09.112047765 +0000 UTC m=+931.525441023" observedRunningTime="2025-11-22 07:18:17.378827478 +0000 UTC m=+939.792220736" watchObservedRunningTime="2025-11-22 07:18:17.381640174 +0000 UTC m=+939.795033432" Nov 22 07:18:17 crc kubenswrapper[4856]: I1122 07:18:17.386571 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:18:17 crc kubenswrapper[4856]: I1122 07:18:17.425950 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:18:17 crc kubenswrapper[4856]: I1122 07:18:17.481882 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-fbctt" Nov 22 07:18:22 crc kubenswrapper[4856]: I1122 07:18:22.387000 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" 
event={"ID":"53e0b47b-1bfb-4207-bcbe-37ab71f5a642","Type":"ContainerStarted","Data":"accd3a839d67e3eb193387701475c43fc712dd4818ba938d15a86989f61a27e4"} Nov 22 07:18:23 crc kubenswrapper[4856]: I1122 07:18:23.395595 4856 generic.go:334] "Generic (PLEG): container finished" podID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerID="accd3a839d67e3eb193387701475c43fc712dd4818ba938d15a86989f61a27e4" exitCode=0 Nov 22 07:18:23 crc kubenswrapper[4856]: I1122 07:18:23.395639 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" event={"ID":"53e0b47b-1bfb-4207-bcbe-37ab71f5a642","Type":"ContainerDied","Data":"accd3a839d67e3eb193387701475c43fc712dd4818ba938d15a86989f61a27e4"} Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.156371 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2w22p"] Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.157991 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.167752 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2w22p"] Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.317695 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddw5w\" (UniqueName: \"kubernetes.io/projected/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-kube-api-access-ddw5w\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.317823 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-utilities\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.317986 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-catalog-content\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.403107 4856 generic.go:334] "Generic (PLEG): container finished" podID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerID="731ff0b6a33274ae3a247e3ca890bf91c663c948b121b29947179141f089cb42" exitCode=0 Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.403146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" event={"ID":"53e0b47b-1bfb-4207-bcbe-37ab71f5a642","Type":"ContainerDied","Data":"731ff0b6a33274ae3a247e3ca890bf91c663c948b121b29947179141f089cb42"} Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.418938 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-utilities\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 
22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.418999 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-catalog-content\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.419041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddw5w\" (UniqueName: \"kubernetes.io/projected/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-kube-api-access-ddw5w\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.419437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-utilities\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.419539 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-catalog-content\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.478658 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddw5w\" (UniqueName: \"kubernetes.io/projected/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-kube-api-access-ddw5w\") pod \"certified-operators-2w22p\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:24 crc kubenswrapper[4856]: I1122 07:18:24.774773 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.202288 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2w22p"] Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.409574 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w22p" event={"ID":"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994","Type":"ContainerStarted","Data":"670a210a452c0b1018177f8be17ca290f9eb9b9cd3be27047a48dfe0e7a612a1"} Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.632825 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.740018 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxnxl\" (UniqueName: \"kubernetes.io/projected/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-kube-api-access-kxnxl\") pod \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.740120 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-util\") pod \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.740185 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-bundle\") pod \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\" (UID: \"53e0b47b-1bfb-4207-bcbe-37ab71f5a642\") " Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.741354 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-bundle" (OuterVolumeSpecName: "bundle") pod "53e0b47b-1bfb-4207-bcbe-37ab71f5a642" (UID: "53e0b47b-1bfb-4207-bcbe-37ab71f5a642"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.749918 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-kube-api-access-kxnxl" (OuterVolumeSpecName: "kube-api-access-kxnxl") pod "53e0b47b-1bfb-4207-bcbe-37ab71f5a642" (UID: "53e0b47b-1bfb-4207-bcbe-37ab71f5a642"). InnerVolumeSpecName "kube-api-access-kxnxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.752699 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-util" (OuterVolumeSpecName: "util") pod "53e0b47b-1bfb-4207-bcbe-37ab71f5a642" (UID: "53e0b47b-1bfb-4207-bcbe-37ab71f5a642"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.842241 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.842267 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxnxl\" (UniqueName: \"kubernetes.io/projected/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-kube-api-access-kxnxl\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:25 crc kubenswrapper[4856]: I1122 07:18:25.842278 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53e0b47b-1bfb-4207-bcbe-37ab71f5a642-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:26 crc kubenswrapper[4856]: I1122 07:18:26.416219 4856 generic.go:334] "Generic (PLEG): container finished" podID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerID="7eed5742a247dba7962a6ed3fe37d66bdb5a9ce7411624bbfcb903a0f9f7bd63" exitCode=0 Nov 22 07:18:26 crc kubenswrapper[4856]: I1122 07:18:26.416314 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w22p" event={"ID":"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994","Type":"ContainerDied","Data":"7eed5742a247dba7962a6ed3fe37d66bdb5a9ce7411624bbfcb903a0f9f7bd63"} Nov 22 07:18:26 crc kubenswrapper[4856]: I1122 07:18:26.423407 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" event={"ID":"53e0b47b-1bfb-4207-bcbe-37ab71f5a642","Type":"ContainerDied","Data":"64f32a499cb3a4e3b794fa4d3da2e1e3ea99de370297f0a27e7083894d0f1ac1"} Nov 22 07:18:26 crc kubenswrapper[4856]: I1122 07:18:26.423457 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64f32a499cb3a4e3b794fa4d3da2e1e3ea99de370297f0a27e7083894d0f1ac1" Nov 22 07:18:26 crc kubenswrapper[4856]: I1122 07:18:26.423554 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln" Nov 22 07:18:27 crc kubenswrapper[4856]: I1122 07:18:27.390618 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rdwqk" Nov 22 07:18:27 crc kubenswrapper[4856]: I1122 07:18:27.410087 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-zwkjl" Nov 22 07:18:28 crc kubenswrapper[4856]: I1122 07:18:28.442168 4856 generic.go:334] "Generic (PLEG): container finished" podID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerID="0574bdf597b64fb6f4e3495c7af4c27eba6df879f1ad9a91a69a3350adf02c4b" exitCode=0 Nov 22 07:18:28 crc kubenswrapper[4856]: I1122 07:18:28.442264 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w22p" event={"ID":"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994","Type":"ContainerDied","Data":"0574bdf597b64fb6f4e3495c7af4c27eba6df879f1ad9a91a69a3350adf02c4b"} Nov 22 07:18:29 crc kubenswrapper[4856]: I1122 07:18:29.450215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w22p" event={"ID":"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994","Type":"ContainerStarted","Data":"8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09"} Nov 22 07:18:29 crc kubenswrapper[4856]: I1122 07:18:29.478033 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2w22p" podStartSLOduration=2.980076562 podStartE2EDuration="5.478006286s" podCreationTimestamp="2025-11-22 07:18:24 +0000 UTC" firstStartedPulling="2025-11-22 07:18:26.417787086 +0000 UTC m=+948.831180344" lastFinishedPulling="2025-11-22 07:18:28.91571681 +0000 UTC m=+951.329110068" observedRunningTime="2025-11-22 07:18:29.473975367 +0000 UTC m=+951.887368645" watchObservedRunningTime="2025-11-22 07:18:29.478006286 +0000 UTC m=+951.891399544" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.176734 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89"] Nov 22 07:18:32 crc kubenswrapper[4856]: E1122 07:18:32.177407 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerName="pull" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.177424 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerName="pull" Nov 22 07:18:32 crc kubenswrapper[4856]: E1122 07:18:32.177448 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerName="extract" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.177458 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerName="extract" Nov 22 07:18:32 crc kubenswrapper[4856]: E1122 07:18:32.177474 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerName="util" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.177483 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerName="util" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.177633 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e0b47b-1bfb-4207-bcbe-37ab71f5a642" containerName="extract" Nov 22 
07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.178215 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.180443 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-nvpwf" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.180699 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.181141 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.194019 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89"] Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.336421 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmflb\" (UniqueName: \"kubernetes.io/projected/61864e42-8719-40b5-b1cc-4202c27be724-kube-api-access-bmflb\") pod \"cert-manager-operator-controller-manager-64cf6dff88-wmn89\" (UID: \"61864e42-8719-40b5-b1cc-4202c27be724\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.336489 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/61864e42-8719-40b5-b1cc-4202c27be724-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-wmn89\" (UID: \"61864e42-8719-40b5-b1cc-4202c27be724\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.437926 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmflb\" (UniqueName: \"kubernetes.io/projected/61864e42-8719-40b5-b1cc-4202c27be724-kube-api-access-bmflb\") pod \"cert-manager-operator-controller-manager-64cf6dff88-wmn89\" (UID: \"61864e42-8719-40b5-b1cc-4202c27be724\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.438005 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/61864e42-8719-40b5-b1cc-4202c27be724-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-wmn89\" (UID: \"61864e42-8719-40b5-b1cc-4202c27be724\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.438581 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/61864e42-8719-40b5-b1cc-4202c27be724-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-wmn89\" (UID: \"61864e42-8719-40b5-b1cc-4202c27be724\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.458856 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmflb\" (UniqueName: \"kubernetes.io/projected/61864e42-8719-40b5-b1cc-4202c27be724-kube-api-access-bmflb\") pod 
\"cert-manager-operator-controller-manager-64cf6dff88-wmn89\" (UID: \"61864e42-8719-40b5-b1cc-4202c27be724\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.501794 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" Nov 22 07:18:32 crc kubenswrapper[4856]: I1122 07:18:32.754169 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89"] Nov 22 07:18:33 crc kubenswrapper[4856]: I1122 07:18:33.472584 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" event={"ID":"61864e42-8719-40b5-b1cc-4202c27be724","Type":"ContainerStarted","Data":"abd444bec99a6104868d1536437283d6263a0aa398aee9e0a5c2a5f14a2b4b17"} Nov 22 07:18:34 crc kubenswrapper[4856]: I1122 07:18:34.775316 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:34 crc kubenswrapper[4856]: I1122 07:18:34.775584 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:34 crc kubenswrapper[4856]: I1122 07:18:34.846656 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:35 crc kubenswrapper[4856]: I1122 07:18:35.562019 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:35 crc kubenswrapper[4856]: I1122 07:18:35.941167 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2w22p"] Nov 22 07:18:37 crc kubenswrapper[4856]: I1122 07:18:37.519023 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2w22p" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="registry-server" containerID="cri-o://8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09" gracePeriod=2 Nov 22 07:18:38 crc kubenswrapper[4856]: I1122 07:18:38.527102 4856 generic.go:334] "Generic (PLEG): container finished" podID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerID="8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09" exitCode=0 Nov 22 07:18:38 crc kubenswrapper[4856]: I1122 07:18:38.527145 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w22p" event={"ID":"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994","Type":"ContainerDied","Data":"8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09"} Nov 22 07:18:44 crc kubenswrapper[4856]: E1122 07:18:44.776257 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09 is running failed: container process not found" containerID="8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:18:44 crc kubenswrapper[4856]: E1122 07:18:44.778140 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09 is running failed: container process not found" containerID="8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:18:44 crc kubenswrapper[4856]: E1122 07:18:44.778587 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09 is running failed: container process not found" containerID="8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:18:44 crc kubenswrapper[4856]: E1122 07:18:44.778642 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-2w22p" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="registry-server" Nov 22 07:18:52 crc kubenswrapper[4856]: E1122 07:18:52.440176 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911" Nov 22 07:18:52 crc kubenswrapper[4856]: E1122 07:18:52.441022 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) --trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) 
--unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmflb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-64cf6dff88-wmn89_cert-manager-operator(61864e42-8719-40b5-b1cc-4202c27be724): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:18:52 crc kubenswrapper[4856]: E1122 07:18:52.442522 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" podUID="61864e42-8719-40b5-b1cc-4202c27be724" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.619373 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2w22p" event={"ID":"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994","Type":"ContainerDied","Data":"670a210a452c0b1018177f8be17ca290f9eb9b9cd3be27047a48dfe0e7a612a1"} Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.619760 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="670a210a452c0b1018177f8be17ca290f9eb9b9cd3be27047a48dfe0e7a612a1" Nov 22 07:18:52 crc kubenswrapper[4856]: E1122 07:18:52.621617 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" podUID="61864e42-8719-40b5-b1cc-4202c27be724" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.631195 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.746838 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-utilities\") pod \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.746943 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddw5w\" (UniqueName: \"kubernetes.io/projected/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-kube-api-access-ddw5w\") pod \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.746985 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-catalog-content\") pod \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\" (UID: \"1b9d085d-9e65-4c72-81b6-0d2c1fa3e994\") " Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.748032 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-utilities" (OuterVolumeSpecName: "utilities") pod "1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" (UID: "1b9d085d-9e65-4c72-81b6-0d2c1fa3e994"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.761822 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-kube-api-access-ddw5w" (OuterVolumeSpecName: "kube-api-access-ddw5w") pod "1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" (UID: "1b9d085d-9e65-4c72-81b6-0d2c1fa3e994"). InnerVolumeSpecName "kube-api-access-ddw5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.798123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" (UID: "1b9d085d-9e65-4c72-81b6-0d2c1fa3e994"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.849613 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.849657 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddw5w\" (UniqueName: \"kubernetes.io/projected/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-kube-api-access-ddw5w\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:52 crc kubenswrapper[4856]: I1122 07:18:52.849669 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:18:53 crc kubenswrapper[4856]: I1122 07:18:53.624341 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2w22p" Nov 22 07:18:53 crc kubenswrapper[4856]: I1122 07:18:53.649242 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2w22p"] Nov 22 07:18:53 crc kubenswrapper[4856]: I1122 07:18:53.653264 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2w22p"] Nov 22 07:18:54 crc kubenswrapper[4856]: I1122 07:18:54.717144 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" path="/var/lib/kubelet/pods/1b9d085d-9e65-4c72-81b6-0d2c1fa3e994/volumes" Nov 22 07:18:59 crc kubenswrapper[4856]: I1122 07:18:59.754974 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:18:59 crc kubenswrapper[4856]: I1122 07:18:59.755042 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:19:07 crc kubenswrapper[4856]: I1122 07:19:07.708355 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" event={"ID":"61864e42-8719-40b5-b1cc-4202c27be724","Type":"ContainerStarted","Data":"3933bac75d18304d212de90479848a8bafd0f306beee485e6f6d240a78723769"} Nov 22 07:19:07 crc kubenswrapper[4856]: I1122 07:19:07.729006 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-wmn89" podStartSLOduration=1.874786313 podStartE2EDuration="35.728986525s" podCreationTimestamp="2025-11-22 07:18:32 +0000 UTC" firstStartedPulling="2025-11-22 07:18:32.780727702 +0000 UTC m=+955.194120970" lastFinishedPulling="2025-11-22 07:19:06.634927924 +0000 UTC m=+989.048321182" observedRunningTime="2025-11-22 07:19:07.728217234 +0000 UTC m=+990.141610492" watchObservedRunningTime="2025-11-22 07:19:07.728986525 +0000 UTC m=+990.142379783" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.273365 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-28g8j"] Nov 22 07:19:12 crc kubenswrapper[4856]: E1122 07:19:12.274255 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="registry-server" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.274272 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="registry-server" Nov 22 07:19:12 crc kubenswrapper[4856]: E1122 07:19:12.274289 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="extract-utilities" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.274295 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="extract-utilities" Nov 22 07:19:12 crc kubenswrapper[4856]: E1122 07:19:12.274307 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="extract-content" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.274314 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="extract-content" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.274456 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9d085d-9e65-4c72-81b6-0d2c1fa3e994" containerName="registry-server" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.274968 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.277103 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.277318 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.277486 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tf4zn" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.284048 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-28g8j"] Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.414831 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p46j2\" (UniqueName: \"kubernetes.io/projected/252a44ce-6594-4999-9785-22cabfc6b0d5-kube-api-access-p46j2\") pod \"cert-manager-webhook-f4fb5df64-28g8j\" (UID: \"252a44ce-6594-4999-9785-22cabfc6b0d5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.414888 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/252a44ce-6594-4999-9785-22cabfc6b0d5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-28g8j\" (UID: \"252a44ce-6594-4999-9785-22cabfc6b0d5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.516493 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p46j2\" (UniqueName: \"kubernetes.io/projected/252a44ce-6594-4999-9785-22cabfc6b0d5-kube-api-access-p46j2\") pod \"cert-manager-webhook-f4fb5df64-28g8j\" (UID: \"252a44ce-6594-4999-9785-22cabfc6b0d5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.516559 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/252a44ce-6594-4999-9785-22cabfc6b0d5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-28g8j\" (UID: \"252a44ce-6594-4999-9785-22cabfc6b0d5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.536053 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/252a44ce-6594-4999-9785-22cabfc6b0d5-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-28g8j\" (UID: \"252a44ce-6594-4999-9785-22cabfc6b0d5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.536149 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p46j2\" (UniqueName: \"kubernetes.io/projected/252a44ce-6594-4999-9785-22cabfc6b0d5-kube-api-access-p46j2\") pod \"cert-manager-webhook-f4fb5df64-28g8j\" (UID: \"252a44ce-6594-4999-9785-22cabfc6b0d5\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.594038 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:12 crc kubenswrapper[4856]: I1122 07:19:12.806411 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-28g8j"] Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.255493 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt"] Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.257594 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.259748 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-qbwb5" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.265833 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt"] Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.330434 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be8f5834-2ddb-4156-8185-ae87e19cb6f6-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jw8xt\" (UID: \"be8f5834-2ddb-4156-8185-ae87e19cb6f6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.330484 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wvdq\" (UniqueName: \"kubernetes.io/projected/be8f5834-2ddb-4156-8185-ae87e19cb6f6-kube-api-access-4wvdq\") pod \"cert-manager-cainjector-855d9ccff4-jw8xt\" (UID: \"be8f5834-2ddb-4156-8185-ae87e19cb6f6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.432063 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be8f5834-2ddb-4156-8185-ae87e19cb6f6-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jw8xt\" (UID: \"be8f5834-2ddb-4156-8185-ae87e19cb6f6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.432111 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wvdq\" (UniqueName: \"kubernetes.io/projected/be8f5834-2ddb-4156-8185-ae87e19cb6f6-kube-api-access-4wvdq\") pod \"cert-manager-cainjector-855d9ccff4-jw8xt\" (UID: \"be8f5834-2ddb-4156-8185-ae87e19cb6f6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.453417 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/be8f5834-2ddb-4156-8185-ae87e19cb6f6-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-jw8xt\" (UID: \"be8f5834-2ddb-4156-8185-ae87e19cb6f6\") " 
pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.453902 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wvdq\" (UniqueName: \"kubernetes.io/projected/be8f5834-2ddb-4156-8185-ae87e19cb6f6-kube-api-access-4wvdq\") pod \"cert-manager-cainjector-855d9ccff4-jw8xt\" (UID: \"be8f5834-2ddb-4156-8185-ae87e19cb6f6\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.584981 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.747795 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" event={"ID":"252a44ce-6594-4999-9785-22cabfc6b0d5","Type":"ContainerStarted","Data":"e227badb366e38c9ab5db22c0608ac956af2a2830ac01e0f940b247ae80a7e79"} Nov 22 07:19:13 crc kubenswrapper[4856]: I1122 07:19:13.808858 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt"] Nov 22 07:19:13 crc kubenswrapper[4856]: W1122 07:19:13.827634 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe8f5834_2ddb_4156_8185_ae87e19cb6f6.slice/crio-5744a128b7d40aa4166fc4a5af9cf975c6e579af759cb7afcb99a9522c5047fa WatchSource:0}: Error finding container 5744a128b7d40aa4166fc4a5af9cf975c6e579af759cb7afcb99a9522c5047fa: Status 404 returned error can't find the container with id 5744a128b7d40aa4166fc4a5af9cf975c6e579af759cb7afcb99a9522c5047fa Nov 22 07:19:14 crc kubenswrapper[4856]: I1122 07:19:14.771798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" event={"ID":"be8f5834-2ddb-4156-8185-ae87e19cb6f6","Type":"ContainerStarted","Data":"5744a128b7d40aa4166fc4a5af9cf975c6e579af759cb7afcb99a9522c5047fa"} Nov 22 07:19:21 crc kubenswrapper[4856]: I1122 07:19:21.822070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" event={"ID":"252a44ce-6594-4999-9785-22cabfc6b0d5","Type":"ContainerStarted","Data":"3c5a3f89c43348551091b53e711a3dc4d61d9bbd97968c1a8ed84227e2810b82"} Nov 22 07:19:21 crc kubenswrapper[4856]: I1122 07:19:21.822620 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:21 crc kubenswrapper[4856]: I1122 07:19:21.823473 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" event={"ID":"be8f5834-2ddb-4156-8185-ae87e19cb6f6","Type":"ContainerStarted","Data":"eb6069c21ff721c3d23d508fb9f10d0a2500f48aa65103a29e0b25e3e22f2955"} Nov 22 07:19:21 crc kubenswrapper[4856]: I1122 07:19:21.861936 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" podStartSLOduration=1.481355881 podStartE2EDuration="9.861901865s" podCreationTimestamp="2025-11-22 07:19:12 +0000 UTC" firstStartedPulling="2025-11-22 07:19:12.822741813 +0000 UTC m=+995.236135071" lastFinishedPulling="2025-11-22 07:19:21.203287797 +0000 UTC m=+1003.616681055" observedRunningTime="2025-11-22 07:19:21.843798808 +0000 UTC m=+1004.257192066" watchObservedRunningTime="2025-11-22 07:19:21.861901865 +0000 UTC m=+1004.275295123" Nov 22 07:19:21 
crc kubenswrapper[4856]: I1122 07:19:21.864550 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-jw8xt" podStartSLOduration=1.4646201699999999 podStartE2EDuration="8.864532366s" podCreationTimestamp="2025-11-22 07:19:13 +0000 UTC" firstStartedPulling="2025-11-22 07:19:13.830381867 +0000 UTC m=+996.243775125" lastFinishedPulling="2025-11-22 07:19:21.230294063 +0000 UTC m=+1003.643687321" observedRunningTime="2025-11-22 07:19:21.861827363 +0000 UTC m=+1004.275220621" watchObservedRunningTime="2025-11-22 07:19:21.864532366 +0000 UTC m=+1004.277925624" Nov 22 07:19:27 crc kubenswrapper[4856]: I1122 07:19:27.597620 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-28g8j" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.637423 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7svc5"] Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.638395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.640747 4856 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-vwww4" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.649239 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7svc5"] Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.700573 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d18d974-0eb5-4949-9632-f8f0d00946b5-bound-sa-token\") pod \"cert-manager-86cb77c54b-7svc5\" (UID: \"6d18d974-0eb5-4949-9632-f8f0d00946b5\") " pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.700640 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqqj8\" (UniqueName: \"kubernetes.io/projected/6d18d974-0eb5-4949-9632-f8f0d00946b5-kube-api-access-qqqj8\") pod \"cert-manager-86cb77c54b-7svc5\" (UID: \"6d18d974-0eb5-4949-9632-f8f0d00946b5\") " pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.755051 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.755108 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.801839 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d18d974-0eb5-4949-9632-f8f0d00946b5-bound-sa-token\") pod \"cert-manager-86cb77c54b-7svc5\" (UID: \"6d18d974-0eb5-4949-9632-f8f0d00946b5\") " pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 
07:19:29.801902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqqj8\" (UniqueName: \"kubernetes.io/projected/6d18d974-0eb5-4949-9632-f8f0d00946b5-kube-api-access-qqqj8\") pod \"cert-manager-86cb77c54b-7svc5\" (UID: \"6d18d974-0eb5-4949-9632-f8f0d00946b5\") " pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.823271 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6d18d974-0eb5-4949-9632-f8f0d00946b5-bound-sa-token\") pod \"cert-manager-86cb77c54b-7svc5\" (UID: \"6d18d974-0eb5-4949-9632-f8f0d00946b5\") " pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.824255 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqqj8\" (UniqueName: \"kubernetes.io/projected/6d18d974-0eb5-4949-9632-f8f0d00946b5-kube-api-access-qqqj8\") pod \"cert-manager-86cb77c54b-7svc5\" (UID: \"6d18d974-0eb5-4949-9632-f8f0d00946b5\") " pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:29 crc kubenswrapper[4856]: I1122 07:19:29.958215 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-7svc5" Nov 22 07:19:30 crc kubenswrapper[4856]: I1122 07:19:30.342227 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-7svc5"] Nov 22 07:19:30 crc kubenswrapper[4856]: I1122 07:19:30.876872 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-7svc5" event={"ID":"6d18d974-0eb5-4949-9632-f8f0d00946b5","Type":"ContainerStarted","Data":"937b0a6f5b33cb569f1a8b6ba7b8aae08e35fed1acfd196229ff25a8b35bb8d9"} Nov 22 07:19:30 crc kubenswrapper[4856]: I1122 07:19:30.877346 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-7svc5" event={"ID":"6d18d974-0eb5-4949-9632-f8f0d00946b5","Type":"ContainerStarted","Data":"424a7ec2981766083b1caf2f6268c695446b1b04147bfad9dd7eccacc0853b1c"} Nov 22 07:19:31 crc kubenswrapper[4856]: I1122 07:19:31.898597 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-7svc5" podStartSLOduration=2.89850668 podStartE2EDuration="2.89850668s" podCreationTimestamp="2025-11-22 07:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:19:31.896007212 +0000 UTC m=+1014.309400480" watchObservedRunningTime="2025-11-22 07:19:31.89850668 +0000 UTC m=+1014.311899948" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.331571 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vsv57"] Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.333499 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vsv57" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.339424 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-z6qpp" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.340969 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.341402 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.373485 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vsv57"] Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.471878 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl68p\" (UniqueName: \"kubernetes.io/projected/842accac-4202-4c80-a903-ebdbc52580ea-kube-api-access-dl68p\") pod \"openstack-operator-index-vsv57\" (UID: \"842accac-4202-4c80-a903-ebdbc52580ea\") " pod="openstack-operators/openstack-operator-index-vsv57" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.573995 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl68p\" (UniqueName: \"kubernetes.io/projected/842accac-4202-4c80-a903-ebdbc52580ea-kube-api-access-dl68p\") pod \"openstack-operator-index-vsv57\" (UID: \"842accac-4202-4c80-a903-ebdbc52580ea\") " pod="openstack-operators/openstack-operator-index-vsv57" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.601111 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl68p\" (UniqueName: \"kubernetes.io/projected/842accac-4202-4c80-a903-ebdbc52580ea-kube-api-access-dl68p\") pod \"openstack-operator-index-vsv57\" (UID: \"842accac-4202-4c80-a903-ebdbc52580ea\") " pod="openstack-operators/openstack-operator-index-vsv57" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.654237 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vsv57" Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.846406 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vsv57"] Nov 22 07:19:41 crc kubenswrapper[4856]: I1122 07:19:41.958846 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsv57" event={"ID":"842accac-4202-4c80-a903-ebdbc52580ea","Type":"ContainerStarted","Data":"1f0e0bf41956f67b8ab715a69b3bd52419a6dd32163223fc36ad11967c8f54ac"} Nov 22 07:19:43 crc kubenswrapper[4856]: I1122 07:19:43.970715 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsv57" event={"ID":"842accac-4202-4c80-a903-ebdbc52580ea","Type":"ContainerStarted","Data":"f1a68e6aa0eee94ad78008b20b1a0dff0b0c381091ba75208b8ba56d7acf2ce1"} Nov 22 07:19:43 crc kubenswrapper[4856]: I1122 07:19:43.988691 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vsv57" podStartSLOduration=2.131244732 podStartE2EDuration="2.988672852s" podCreationTimestamp="2025-11-22 07:19:41 +0000 UTC" firstStartedPulling="2025-11-22 07:19:41.856414735 +0000 UTC m=+1024.269807993" lastFinishedPulling="2025-11-22 07:19:42.713842855 +0000 UTC m=+1025.127236113" observedRunningTime="2025-11-22 07:19:43.983960864 +0000 UTC m=+1026.397354132" watchObservedRunningTime="2025-11-22 07:19:43.988672852 +0000 UTC m=+1026.402066110" Nov 22 07:19:44 crc kubenswrapper[4856]: I1122 07:19:44.504453 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vsv57"] Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.114406 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-k5p5q"] Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.115359 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.129300 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k5p5q"] Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.243157 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr4jk\" (UniqueName: \"kubernetes.io/projected/53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0-kube-api-access-hr4jk\") pod \"openstack-operator-index-k5p5q\" (UID: \"53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0\") " pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.344804 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr4jk\" (UniqueName: \"kubernetes.io/projected/53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0-kube-api-access-hr4jk\") pod \"openstack-operator-index-k5p5q\" (UID: \"53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0\") " pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.362437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr4jk\" (UniqueName: \"kubernetes.io/projected/53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0-kube-api-access-hr4jk\") pod \"openstack-operator-index-k5p5q\" (UID: \"53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0\") " pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.446318 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.912907 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-k5p5q"] Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.982268 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5p5q" event={"ID":"53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0","Type":"ContainerStarted","Data":"0f707ba6108135d63b87961838dd6a32566985220e33ca0a29fc0f011bb4dc5b"} Nov 22 07:19:45 crc kubenswrapper[4856]: I1122 07:19:45.982394 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vsv57" podUID="842accac-4202-4c80-a903-ebdbc52580ea" containerName="registry-server" containerID="cri-o://f1a68e6aa0eee94ad78008b20b1a0dff0b0c381091ba75208b8ba56d7acf2ce1" gracePeriod=2 Nov 22 07:19:46 crc kubenswrapper[4856]: I1122 07:19:46.990691 4856 generic.go:334] "Generic (PLEG): container finished" podID="842accac-4202-4c80-a903-ebdbc52580ea" containerID="f1a68e6aa0eee94ad78008b20b1a0dff0b0c381091ba75208b8ba56d7acf2ce1" exitCode=0 Nov 22 07:19:46 crc kubenswrapper[4856]: I1122 07:19:46.990798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsv57" event={"ID":"842accac-4202-4c80-a903-ebdbc52580ea","Type":"ContainerDied","Data":"f1a68e6aa0eee94ad78008b20b1a0dff0b0c381091ba75208b8ba56d7acf2ce1"} Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.084028 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vsv57" Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.273763 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dl68p\" (UniqueName: \"kubernetes.io/projected/842accac-4202-4c80-a903-ebdbc52580ea-kube-api-access-dl68p\") pod \"842accac-4202-4c80-a903-ebdbc52580ea\" (UID: \"842accac-4202-4c80-a903-ebdbc52580ea\") " Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.279227 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/842accac-4202-4c80-a903-ebdbc52580ea-kube-api-access-dl68p" (OuterVolumeSpecName: "kube-api-access-dl68p") pod "842accac-4202-4c80-a903-ebdbc52580ea" (UID: "842accac-4202-4c80-a903-ebdbc52580ea"). InnerVolumeSpecName "kube-api-access-dl68p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.374692 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dl68p\" (UniqueName: \"kubernetes.io/projected/842accac-4202-4c80-a903-ebdbc52580ea-kube-api-access-dl68p\") on node \"crc\" DevicePath \"\"" Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.997690 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vsv57" event={"ID":"842accac-4202-4c80-a903-ebdbc52580ea","Type":"ContainerDied","Data":"1f0e0bf41956f67b8ab715a69b3bd52419a6dd32163223fc36ad11967c8f54ac"} Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.997767 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vsv57" Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.997965 4856 scope.go:117] "RemoveContainer" containerID="f1a68e6aa0eee94ad78008b20b1a0dff0b0c381091ba75208b8ba56d7acf2ce1" Nov 22 07:19:47 crc kubenswrapper[4856]: I1122 07:19:47.999328 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-k5p5q" event={"ID":"53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0","Type":"ContainerStarted","Data":"6cd4009a5b9fa99ed86dbac2c6e916bc01aba4b5b000f14ee4ec1b9bb5bdfd96"} Nov 22 07:19:48 crc kubenswrapper[4856]: I1122 07:19:48.032361 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-k5p5q" podStartSLOduration=1.545611423 podStartE2EDuration="3.032323583s" podCreationTimestamp="2025-11-22 07:19:45 +0000 UTC" firstStartedPulling="2025-11-22 07:19:45.932056036 +0000 UTC m=+1028.345449284" lastFinishedPulling="2025-11-22 07:19:47.418768166 +0000 UTC m=+1029.832161444" observedRunningTime="2025-11-22 07:19:48.021381338 +0000 UTC m=+1030.434774616" watchObservedRunningTime="2025-11-22 07:19:48.032323583 +0000 UTC m=+1030.445716871" Nov 22 07:19:48 crc kubenswrapper[4856]: I1122 07:19:48.038888 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vsv57"] Nov 22 07:19:48 crc kubenswrapper[4856]: I1122 07:19:48.044576 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vsv57"] Nov 22 07:19:48 crc kubenswrapper[4856]: I1122 07:19:48.717060 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="842accac-4202-4c80-a903-ebdbc52580ea" path="/var/lib/kubelet/pods/842accac-4202-4c80-a903-ebdbc52580ea/volumes" Nov 22 07:19:55 crc kubenswrapper[4856]: I1122 07:19:55.447228 4856 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:55 crc kubenswrapper[4856]: I1122 07:19:55.448058 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:55 crc kubenswrapper[4856]: I1122 07:19:55.492035 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:56 crc kubenswrapper[4856]: I1122 07:19:56.078412 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-k5p5q" Nov 22 07:19:59 crc kubenswrapper[4856]: I1122 07:19:59.754286 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:19:59 crc kubenswrapper[4856]: I1122 07:19:59.754656 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:19:59 crc kubenswrapper[4856]: I1122 07:19:59.754708 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:19:59 crc kubenswrapper[4856]: I1122 07:19:59.755260 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2ea5ccf83836498246295e06fea7da0e6ecc690c06aeac649547d0e64344abd"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:19:59 crc kubenswrapper[4856]: I1122 07:19:59.755312 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://b2ea5ccf83836498246295e06fea7da0e6ecc690c06aeac649547d0e64344abd" gracePeriod=600 Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.543793 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj"] Nov 22 07:20:02 crc kubenswrapper[4856]: E1122 07:20:02.544435 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842accac-4202-4c80-a903-ebdbc52580ea" containerName="registry-server" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.544452 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="842accac-4202-4c80-a903-ebdbc52580ea" containerName="registry-server" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.544587 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="842accac-4202-4c80-a903-ebdbc52580ea" containerName="registry-server" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.545450 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.548944 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zbjkn" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.553890 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj"] Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.731277 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.731334 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvwpl\" (UniqueName: \"kubernetes.io/projected/e4a4c291-e079-478c-a3fb-86c0e9eceb07-kube-api-access-gvwpl\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.731385 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.832371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.833149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.833185 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvwpl\" (UniqueName: \"kubernetes.io/projected/e4a4c291-e079-478c-a3fb-86c0e9eceb07-kube-api-access-gvwpl\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.833640 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-util\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.833696 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-bundle\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.854449 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvwpl\" (UniqueName: \"kubernetes.io/projected/e4a4c291-e079-478c-a3fb-86c0e9eceb07-kube-api-access-gvwpl\") pod \"1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:02 crc kubenswrapper[4856]: I1122 07:20:02.906955 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.302805 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj"] Nov 22 07:20:03 crc kubenswrapper[4856]: W1122 07:20:03.311743 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4a4c291_e079_478c_a3fb_86c0e9eceb07.slice/crio-bedb4e6b6bb59815333e5e0992cd24a13f19135dffdd778f668d8e18ef271832 WatchSource:0}: Error finding container bedb4e6b6bb59815333e5e0992cd24a13f19135dffdd778f668d8e18ef271832: Status 404 returned error can't find the container with id bedb4e6b6bb59815333e5e0992cd24a13f19135dffdd778f668d8e18ef271832 Nov 22 07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.541544 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="b2ea5ccf83836498246295e06fea7da0e6ecc690c06aeac649547d0e64344abd" exitCode=0 Nov 22 07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.541606 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"b2ea5ccf83836498246295e06fea7da0e6ecc690c06aeac649547d0e64344abd"} Nov 22 07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.541795 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"4366d97abee77d6bcf27f0824324e78ad727912da8d9c8585365d5f93d21ed74"} Nov 22 07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.541816 4856 scope.go:117] "RemoveContainer" containerID="704ded6d89f91ae94e03498e78b0126d0b80a3e0d0c6bf737cb1be33e4a00015" Nov 22 07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.545012 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerID="ea184e215a410e38245e099af614e2a6566cb1e5c105d07eab89d91ff1fe4849" exitCode=0 Nov 22 
07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.545035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" event={"ID":"e4a4c291-e079-478c-a3fb-86c0e9eceb07","Type":"ContainerDied","Data":"ea184e215a410e38245e099af614e2a6566cb1e5c105d07eab89d91ff1fe4849"} Nov 22 07:20:03 crc kubenswrapper[4856]: I1122 07:20:03.545051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" event={"ID":"e4a4c291-e079-478c-a3fb-86c0e9eceb07","Type":"ContainerStarted","Data":"bedb4e6b6bb59815333e5e0992cd24a13f19135dffdd778f668d8e18ef271832"} Nov 22 07:20:06 crc kubenswrapper[4856]: I1122 07:20:06.573745 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerID="f52fd8d341906b63d7426910898dc6bf71455f4838471e3ad20eca9fdd6c8dff" exitCode=0 Nov 22 07:20:06 crc kubenswrapper[4856]: I1122 07:20:06.573863 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" event={"ID":"e4a4c291-e079-478c-a3fb-86c0e9eceb07","Type":"ContainerDied","Data":"f52fd8d341906b63d7426910898dc6bf71455f4838471e3ad20eca9fdd6c8dff"} Nov 22 07:20:07 crc kubenswrapper[4856]: I1122 07:20:07.583744 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerID="199cef7de51c7a094da6cf8ed28ef263e436b2cbb72b4e30335b342f2d2b0cb8" exitCode=0 Nov 22 07:20:07 crc kubenswrapper[4856]: I1122 07:20:07.584106 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" event={"ID":"e4a4c291-e079-478c-a3fb-86c0e9eceb07","Type":"ContainerDied","Data":"199cef7de51c7a094da6cf8ed28ef263e436b2cbb72b4e30335b342f2d2b0cb8"} Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.251667 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.320463 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-util\") pod \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.320565 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvwpl\" (UniqueName: \"kubernetes.io/projected/e4a4c291-e079-478c-a3fb-86c0e9eceb07-kube-api-access-gvwpl\") pod \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.320607 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-bundle\") pod \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\" (UID: \"e4a4c291-e079-478c-a3fb-86c0e9eceb07\") " Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.322704 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-bundle" (OuterVolumeSpecName: "bundle") pod "e4a4c291-e079-478c-a3fb-86c0e9eceb07" (UID: "e4a4c291-e079-478c-a3fb-86c0e9eceb07"). 
InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.329373 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a4c291-e079-478c-a3fb-86c0e9eceb07-kube-api-access-gvwpl" (OuterVolumeSpecName: "kube-api-access-gvwpl") pod "e4a4c291-e079-478c-a3fb-86c0e9eceb07" (UID: "e4a4c291-e079-478c-a3fb-86c0e9eceb07"). InnerVolumeSpecName "kube-api-access-gvwpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.422012 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvwpl\" (UniqueName: \"kubernetes.io/projected/e4a4c291-e079-478c-a3fb-86c0e9eceb07-kube-api-access-gvwpl\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.422114 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.602437 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" event={"ID":"e4a4c291-e079-478c-a3fb-86c0e9eceb07","Type":"ContainerDied","Data":"bedb4e6b6bb59815333e5e0992cd24a13f19135dffdd778f668d8e18ef271832"} Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.602538 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bedb4e6b6bb59815333e5e0992cd24a13f19135dffdd778f668d8e18ef271832" Nov 22 07:20:09 crc kubenswrapper[4856]: I1122 07:20:09.602574 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj" Nov 22 07:20:10 crc kubenswrapper[4856]: I1122 07:20:10.019438 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-util" (OuterVolumeSpecName: "util") pod "e4a4c291-e079-478c-a3fb-86c0e9eceb07" (UID: "e4a4c291-e079-478c-a3fb-86c0e9eceb07"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:20:10 crc kubenswrapper[4856]: I1122 07:20:10.031320 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e4a4c291-e079-478c-a3fb-86c0e9eceb07-util\") on node \"crc\" DevicePath \"\"" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.286436 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt"] Nov 22 07:20:14 crc kubenswrapper[4856]: E1122 07:20:14.287262 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerName="extract" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.287279 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerName="extract" Nov 22 07:20:14 crc kubenswrapper[4856]: E1122 07:20:14.287298 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerName="pull" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.287305 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerName="pull" Nov 22 07:20:14 crc kubenswrapper[4856]: E1122 07:20:14.287321 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerName="util" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.287328 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerName="util" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.287441 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a4c291-e079-478c-a3fb-86c0e9eceb07" containerName="extract" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.288157 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.295603 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-9czgb" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.313554 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt"] Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.412198 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh99\" (UniqueName: \"kubernetes.io/projected/d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f-kube-api-access-2bh99\") pod \"openstack-operator-controller-operator-8486c7f98b-j2sjt\" (UID: \"d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.513149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh99\" (UniqueName: \"kubernetes.io/projected/d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f-kube-api-access-2bh99\") pod \"openstack-operator-controller-operator-8486c7f98b-j2sjt\" (UID: \"d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.538385 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bh99\" (UniqueName: \"kubernetes.io/projected/d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f-kube-api-access-2bh99\") pod \"openstack-operator-controller-operator-8486c7f98b-j2sjt\" (UID: \"d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f\") " pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" Nov 22 07:20:14 crc kubenswrapper[4856]: I1122 07:20:14.605338 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" Nov 22 07:20:15 crc kubenswrapper[4856]: I1122 07:20:15.066772 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt"] Nov 22 07:20:15 crc kubenswrapper[4856]: I1122 07:20:15.644374 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" event={"ID":"d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f","Type":"ContainerStarted","Data":"398c70b5100b0c39486761a216a1f0a87772fe1c7892c62b63a650f354450c0b"} Nov 22 07:20:19 crc kubenswrapper[4856]: I1122 07:20:19.685959 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" event={"ID":"d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f","Type":"ContainerStarted","Data":"ce13383a6132f6af918f6c8ecef274f5360fd28fc090fbf5d5e4006fa9184548"} Nov 22 07:20:22 crc kubenswrapper[4856]: I1122 07:20:22.719006 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" event={"ID":"d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f","Type":"ContainerStarted","Data":"d8d99ec701d6c2d2ddd745f26e8abb599f0668edd65137f7f25b49cb5af21768"} Nov 22 07:20:22 crc kubenswrapper[4856]: I1122 07:20:22.719913 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" Nov 22 07:20:22 crc kubenswrapper[4856]: I1122 07:20:22.752665 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" podStartSLOduration=1.7331181820000001 podStartE2EDuration="8.752633879s" podCreationTimestamp="2025-11-22 07:20:14 +0000 UTC" firstStartedPulling="2025-11-22 07:20:15.072670465 +0000 UTC m=+1057.486063723" lastFinishedPulling="2025-11-22 07:20:22.092186162 +0000 UTC m=+1064.505579420" observedRunningTime="2025-11-22 07:20:22.750168674 +0000 UTC m=+1065.163561932" watchObservedRunningTime="2025-11-22 07:20:22.752633879 +0000 UTC m=+1065.166027137" Nov 22 07:20:24 crc kubenswrapper[4856]: I1122 07:20:24.608088 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8486c7f98b-j2sjt" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.674397 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.676200 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.680961 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-4vglq" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.686529 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.694988 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.696231 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.703926 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fx9fj" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.722140 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.742143 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.747324 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.750703 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jvz28" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.758981 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.782011 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.783323 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.785920 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.785972 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kzzv\" (UniqueName: \"kubernetes.io/projected/5ac8c521-cea0-4bdf-a90c-5d61cff9e30d-kube-api-access-4kzzv\") pod \"barbican-operator-controller-manager-7768f8c84f-8hnhw\" (UID: \"5ac8c521-cea0-4bdf-a90c-5d61cff9e30d\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.786131 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t6pd\" (UniqueName: \"kubernetes.io/projected/6726415a-8b70-4cde-80fa-5e9954cacb16-kube-api-access-5t6pd\") pod \"cinder-operator-controller-manager-6d8fd67bf7-w92jz\" (UID: \"6726415a-8b70-4cde-80fa-5e9954cacb16\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.786964 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nwmwv" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.799268 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.800195 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.802484 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-f2nrq" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.823353 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.824333 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.833333 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bxrnx" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.852733 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.857314 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.860424 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.861366 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.863311 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.869907 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-bspfp" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.870327 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.872278 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.874095 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-skl4p" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.876010 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.888854 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.890835 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2gw7\" (UniqueName: \"kubernetes.io/projected/1a831555-0593-4c78-9b32-8469445182c6-kube-api-access-c2gw7\") pod \"designate-operator-controller-manager-56dfb6b67f-f29xt\" (UID: \"1a831555-0593-4c78-9b32-8469445182c6\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.890894 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wps8k\" (UniqueName: \"kubernetes.io/projected/11bca657-d3dd-4ecc-b2a7-fc430d0e27d9-kube-api-access-wps8k\") pod \"heat-operator-controller-manager-bf4c6585d-29dmw\" (UID: \"11bca657-d3dd-4ecc-b2a7-fc430d0e27d9\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.890937 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kzzv\" (UniqueName: \"kubernetes.io/projected/5ac8c521-cea0-4bdf-a90c-5d61cff9e30d-kube-api-access-4kzzv\") pod \"barbican-operator-controller-manager-7768f8c84f-8hnhw\" (UID: \"5ac8c521-cea0-4bdf-a90c-5d61cff9e30d\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.891027 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t6pd\" (UniqueName: \"kubernetes.io/projected/6726415a-8b70-4cde-80fa-5e9954cacb16-kube-api-access-5t6pd\") pod \"cinder-operator-controller-manager-6d8fd67bf7-w92jz\" (UID: \"6726415a-8b70-4cde-80fa-5e9954cacb16\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.891281 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bzpjf\" (UniqueName: \"kubernetes.io/projected/b8bdffad-516a-4927-8319-72b583afead1-kube-api-access-bzpjf\") pod \"glance-operator-controller-manager-8667fbf6f6-bsv2v\" (UID: \"b8bdffad-516a-4927-8319-72b583afead1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.904880 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.905842 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.913438 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-7584d" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.928700 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.934335 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kzzv\" (UniqueName: \"kubernetes.io/projected/5ac8c521-cea0-4bdf-a90c-5d61cff9e30d-kube-api-access-4kzzv\") pod \"barbican-operator-controller-manager-7768f8c84f-8hnhw\" (UID: \"5ac8c521-cea0-4bdf-a90c-5d61cff9e30d\") " pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.937766 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.938775 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.947287 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-x6ptv" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.949280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t6pd\" (UniqueName: \"kubernetes.io/projected/6726415a-8b70-4cde-80fa-5e9954cacb16-kube-api-access-5t6pd\") pod \"cinder-operator-controller-manager-6d8fd67bf7-w92jz\" (UID: \"6726415a-8b70-4cde-80fa-5e9954cacb16\") " pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.957537 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.958720 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.962928 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-n2ln2" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.968568 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.972741 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.983452 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv"] Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.984681 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.991362 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-lm7x2" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.992155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzpjf\" (UniqueName: \"kubernetes.io/projected/b8bdffad-516a-4927-8319-72b583afead1-kube-api-access-bzpjf\") pod \"glance-operator-controller-manager-8667fbf6f6-bsv2v\" (UID: \"b8bdffad-516a-4927-8319-72b583afead1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.992262 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdtzd\" (UniqueName: \"kubernetes.io/projected/8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de-kube-api-access-xdtzd\") pod \"horizon-operator-controller-manager-5d86b44686-v7qv6\" (UID: \"8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.992304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2gw7\" (UniqueName: \"kubernetes.io/projected/1a831555-0593-4c78-9b32-8469445182c6-kube-api-access-c2gw7\") pod \"designate-operator-controller-manager-56dfb6b67f-f29xt\" (UID: \"1a831555-0593-4c78-9b32-8469445182c6\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.992338 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wps8k\" (UniqueName: \"kubernetes.io/projected/11bca657-d3dd-4ecc-b2a7-fc430d0e27d9-kube-api-access-wps8k\") pod \"heat-operator-controller-manager-bf4c6585d-29dmw\" (UID: \"11bca657-d3dd-4ecc-b2a7-fc430d0e27d9\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.992408 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2lr5\" (UniqueName: \"kubernetes.io/projected/5fa0bc39-1657-44bf-9c49-0bdee78de9bd-kube-api-access-w2lr5\") pod \"ironic-operator-controller-manager-5c75d7c94b-2cv5t\" (UID: 
\"5fa0bc39-1657-44bf-9c49-0bdee78de9bd\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.992446 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hszc\" (UniqueName: \"kubernetes.io/projected/33b6c3db-1c77-452f-a0b6-26ed5d261a15-kube-api-access-8hszc\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:40 crc kubenswrapper[4856]: I1122 07:20:40.992485 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.004749 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.009338 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.017392 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.036990 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.045481 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzpjf\" (UniqueName: \"kubernetes.io/projected/b8bdffad-516a-4927-8319-72b583afead1-kube-api-access-bzpjf\") pod \"glance-operator-controller-manager-8667fbf6f6-bsv2v\" (UID: \"b8bdffad-516a-4927-8319-72b583afead1\") " pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.081206 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-7cn6v" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.081280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2gw7\" (UniqueName: \"kubernetes.io/projected/1a831555-0593-4c78-9b32-8469445182c6-kube-api-access-c2gw7\") pod \"designate-operator-controller-manager-56dfb6b67f-f29xt\" (UID: \"1a831555-0593-4c78-9b32-8469445182c6\") " pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.086962 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.093489 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.097383 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdtzd\" (UniqueName: \"kubernetes.io/projected/8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de-kube-api-access-xdtzd\") pod \"horizon-operator-controller-manager-5d86b44686-v7qv6\" (UID: \"8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.097448 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk6p8\" (UniqueName: \"kubernetes.io/projected/da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708-kube-api-access-gk6p8\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-5gk8v\" (UID: \"da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.097489 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r6ln\" (UniqueName: \"kubernetes.io/projected/4d026193-be5d-4202-9379-adbff15842b6-kube-api-access-8r6ln\") pod \"neutron-operator-controller-manager-66b7d6f598-fkrzv\" (UID: \"4d026193-be5d-4202-9379-adbff15842b6\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.097533 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2lr5\" (UniqueName: \"kubernetes.io/projected/5fa0bc39-1657-44bf-9c49-0bdee78de9bd-kube-api-access-w2lr5\") pod \"ironic-operator-controller-manager-5c75d7c94b-2cv5t\" (UID: \"5fa0bc39-1657-44bf-9c49-0bdee78de9bd\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.097550 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hszc\" (UniqueName: \"kubernetes.io/projected/33b6c3db-1c77-452f-a0b6-26ed5d261a15-kube-api-access-8hszc\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.097573 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.097590 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2fn7\" (UniqueName: \"kubernetes.io/projected/d160dfd5-d7c2-4004-9b82-e6883be21331-kube-api-access-m2fn7\") pod \"manila-operator-controller-manager-7bb88cb858-89ntq\" (UID: \"d160dfd5-d7c2-4004-9b82-e6883be21331\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" Nov 22 07:20:41 
crc kubenswrapper[4856]: I1122 07:20:41.097617 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh767\" (UniqueName: \"kubernetes.io/projected/c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e-kube-api-access-gh767\") pod \"keystone-operator-controller-manager-7879fb76fd-cnd64\" (UID: \"c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.099254 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.099331 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert podName:33b6c3db-1c77-452f-a0b6-26ed5d261a15 nodeName:}" failed. No retries permitted until 2025-11-22 07:20:41.59930767 +0000 UTC m=+1084.012701028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert") pod "infra-operator-controller-manager-769d9c7585-xgmp2" (UID: "33b6c3db-1c77-452f-a0b6-26ed5d261a15") : secret "infra-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.100165 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wps8k\" (UniqueName: \"kubernetes.io/projected/11bca657-d3dd-4ecc-b2a7-fc430d0e27d9-kube-api-access-wps8k\") pod \"heat-operator-controller-manager-bf4c6585d-29dmw\" (UID: \"11bca657-d3dd-4ecc-b2a7-fc430d0e27d9\") " pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.111965 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.122776 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.134630 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdtzd\" (UniqueName: \"kubernetes.io/projected/8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de-kube-api-access-xdtzd\") pod \"horizon-operator-controller-manager-5d86b44686-v7qv6\" (UID: \"8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de\") " pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.143084 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.144253 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.145997 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.156345 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.158248 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-s7xhj" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.196225 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.197683 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.198419 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwhb\" (UniqueName: \"kubernetes.io/projected/d923f559-33c8-4832-8eec-c8b1879ba8cd-kube-api-access-rfwhb\") pod \"nova-operator-controller-manager-86d796d84d-rk8pp\" (UID: \"d923f559-33c8-4832-8eec-c8b1879ba8cd\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.198546 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gk6p8\" (UniqueName: \"kubernetes.io/projected/da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708-kube-api-access-gk6p8\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-5gk8v\" (UID: \"da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.198580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r6ln\" (UniqueName: \"kubernetes.io/projected/4d026193-be5d-4202-9379-adbff15842b6-kube-api-access-8r6ln\") pod \"neutron-operator-controller-manager-66b7d6f598-fkrzv\" (UID: \"4d026193-be5d-4202-9379-adbff15842b6\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.198647 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2fn7\" (UniqueName: \"kubernetes.io/projected/d160dfd5-d7c2-4004-9b82-e6883be21331-kube-api-access-m2fn7\") pod \"manila-operator-controller-manager-7bb88cb858-89ntq\" (UID: \"d160dfd5-d7c2-4004-9b82-e6883be21331\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.198678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh767\" (UniqueName: \"kubernetes.io/projected/c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e-kube-api-access-gh767\") pod \"keystone-operator-controller-manager-7879fb76fd-cnd64\" (UID: \"c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.200199 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fjspw" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.200336 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.203233 4856 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.224378 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.225364 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.225488 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.227872 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-77bx4" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.239616 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.269571 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.272967 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.279540 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-8cdwl" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.292842 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.294375 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.296994 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7s55b" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.300407 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgjh\" (UniqueName: \"kubernetes.io/projected/7119f7f3-e9e5-49db-afec-6c3b9fbe5a97-kube-api-access-fhgjh\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-nns6d\" (UID: \"7119f7f3-e9e5-49db-afec-6c3b9fbe5a97\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.300547 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfwhb\" (UniqueName: \"kubernetes.io/projected/d923f559-33c8-4832-8eec-c8b1879ba8cd-kube-api-access-rfwhb\") pod \"nova-operator-controller-manager-86d796d84d-rk8pp\" (UID: \"d923f559-33c8-4832-8eec-c8b1879ba8cd\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.300719 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4wmz\" (UniqueName: \"kubernetes.io/projected/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-kube-api-access-j4wmz\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.300808 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.300963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfhn\" (UniqueName: \"kubernetes.io/projected/837a0948-1f0d-4478-8e0a-fd8f897dd107-kube-api-access-jbfhn\") pod \"octavia-operator-controller-manager-6fdc856c5d-nmllh\" (UID: \"837a0948-1f0d-4478-8e0a-fd8f897dd107\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.304051 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2fn7\" (UniqueName: \"kubernetes.io/projected/d160dfd5-d7c2-4004-9b82-e6883be21331-kube-api-access-m2fn7\") pod \"manila-operator-controller-manager-7bb88cb858-89ntq\" (UID: \"d160dfd5-d7c2-4004-9b82-e6883be21331\") " pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.304088 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.304097 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hszc\" (UniqueName: 
\"kubernetes.io/projected/33b6c3db-1c77-452f-a0b6-26ed5d261a15-kube-api-access-8hszc\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.304060 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh767\" (UniqueName: \"kubernetes.io/projected/c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e-kube-api-access-gh767\") pod \"keystone-operator-controller-manager-7879fb76fd-cnd64\" (UID: \"c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e\") " pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.305285 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.312193 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-96djb" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.320197 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r6ln\" (UniqueName: \"kubernetes.io/projected/4d026193-be5d-4202-9379-adbff15842b6-kube-api-access-8r6ln\") pod \"neutron-operator-controller-manager-66b7d6f598-fkrzv\" (UID: \"4d026193-be5d-4202-9379-adbff15842b6\") " pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.329345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gk6p8\" (UniqueName: \"kubernetes.io/projected/da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708-kube-api-access-gk6p8\") pod \"mariadb-operator-controller-manager-6f8c5b86cb-5gk8v\" (UID: \"da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708\") " pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.333819 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.345716 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfwhb\" (UniqueName: \"kubernetes.io/projected/d923f559-33c8-4832-8eec-c8b1879ba8cd-kube-api-access-rfwhb\") pod \"nova-operator-controller-manager-86d796d84d-rk8pp\" (UID: \"d923f559-33c8-4832-8eec-c8b1879ba8cd\") " pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.349348 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2lr5\" (UniqueName: \"kubernetes.io/projected/5fa0bc39-1657-44bf-9c49-0bdee78de9bd-kube-api-access-w2lr5\") pod \"ironic-operator-controller-manager-5c75d7c94b-2cv5t\" (UID: \"5fa0bc39-1657-44bf-9c49-0bdee78de9bd\") " pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.349732 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.360781 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.369455 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.380692 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-swr7c"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.382159 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.387777 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fw9m9" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.392729 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-swr7c"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.404000 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnqb\" (UniqueName: \"kubernetes.io/projected/bd3472b0-3e99-46e7-bef3-dbd8283ce6de-kube-api-access-dvnqb\") pod \"placement-operator-controller-manager-6dc664666c-tgk9d\" (UID: \"bd3472b0-3e99-46e7-bef3-dbd8283ce6de\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.404060 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjn4s\" (UniqueName: \"kubernetes.io/projected/7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0-kube-api-access-fjn4s\") pod \"swift-operator-controller-manager-799cb6ffd6-qf6ld\" (UID: \"7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.404083 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4wmz\" (UniqueName: \"kubernetes.io/projected/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-kube-api-access-j4wmz\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.404104 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.404127 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbfhn\" (UniqueName: \"kubernetes.io/projected/837a0948-1f0d-4478-8e0a-fd8f897dd107-kube-api-access-jbfhn\") pod \"octavia-operator-controller-manager-6fdc856c5d-nmllh\" (UID: \"837a0948-1f0d-4478-8e0a-fd8f897dd107\") " pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.404172 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgjh\" (UniqueName: \"kubernetes.io/projected/7119f7f3-e9e5-49db-afec-6c3b9fbe5a97-kube-api-access-fhgjh\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-nns6d\" (UID: \"7119f7f3-e9e5-49db-afec-6c3b9fbe5a97\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.404190 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7zzv\" (UniqueName: 
\"kubernetes.io/projected/d7809224-a0c8-47fa-91ac-2f02578819fe-kube-api-access-c7zzv\") pod \"telemetry-operator-controller-manager-7798859c74-47z2l\" (UID: \"d7809224-a0c8-47fa-91ac-2f02578819fe\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.404557 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.404602 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert podName:9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1 nodeName:}" failed. No retries permitted until 2025-11-22 07:20:41.904589558 +0000 UTC m=+1084.317982816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" (UID: "9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.425243 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.427063 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.433127 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-p6t7t" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.434269 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4wmz\" (UniqueName: \"kubernetes.io/projected/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-kube-api-access-j4wmz\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.436841 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.463216 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgjh\" (UniqueName: \"kubernetes.io/projected/7119f7f3-e9e5-49db-afec-6c3b9fbe5a97-kube-api-access-fhgjh\") pod \"ovn-operator-controller-manager-5bdf4f7f7f-nns6d\" (UID: \"7119f7f3-e9e5-49db-afec-6c3b9fbe5a97\") " pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.472049 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.472895 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbfhn\" (UniqueName: \"kubernetes.io/projected/837a0948-1f0d-4478-8e0a-fd8f897dd107-kube-api-access-jbfhn\") pod \"octavia-operator-controller-manager-6fdc856c5d-nmllh\" (UID: \"837a0948-1f0d-4478-8e0a-fd8f897dd107\") " 
pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.473371 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.491600 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-85k96" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.491788 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.505191 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjn4s\" (UniqueName: \"kubernetes.io/projected/7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0-kube-api-access-fjn4s\") pod \"swift-operator-controller-manager-799cb6ffd6-qf6ld\" (UID: \"7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.505244 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbhxj\" (UniqueName: \"kubernetes.io/projected/ea4c3d48-c5fc-498d-a095-455572fcbb9e-kube-api-access-sbhxj\") pod \"test-operator-controller-manager-8464cf66df-swr7c\" (UID: \"ea4c3d48-c5fc-498d-a095-455572fcbb9e\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.505283 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7zzv\" (UniqueName: \"kubernetes.io/projected/d7809224-a0c8-47fa-91ac-2f02578819fe-kube-api-access-c7zzv\") pod \"telemetry-operator-controller-manager-7798859c74-47z2l\" (UID: \"d7809224-a0c8-47fa-91ac-2f02578819fe\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.505309 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d95gd\" (UniqueName: \"kubernetes.io/projected/e2be6208-86a1-4604-bddc-a3bd98258537-kube-api-access-d95gd\") pod \"watcher-operator-controller-manager-7cd4fb6f79-pg9z9\" (UID: \"e2be6208-86a1-4604-bddc-a3bd98258537\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.505380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnqb\" (UniqueName: \"kubernetes.io/projected/bd3472b0-3e99-46e7-bef3-dbd8283ce6de-kube-api-access-dvnqb\") pod \"placement-operator-controller-manager-6dc664666c-tgk9d\" (UID: \"bd3472b0-3e99-46e7-bef3-dbd8283ce6de\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.519288 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.519726 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.525769 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.526718 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.533822 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.534394 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-7zpnr" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.537242 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnqb\" (UniqueName: \"kubernetes.io/projected/bd3472b0-3e99-46e7-bef3-dbd8283ce6de-kube-api-access-dvnqb\") pod \"placement-operator-controller-manager-6dc664666c-tgk9d\" (UID: \"bd3472b0-3e99-46e7-bef3-dbd8283ce6de\") " pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.542249 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.544638 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjn4s\" (UniqueName: \"kubernetes.io/projected/7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0-kube-api-access-fjn4s\") pod \"swift-operator-controller-manager-799cb6ffd6-qf6ld\" (UID: \"7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0\") " pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.561592 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7zzv\" (UniqueName: \"kubernetes.io/projected/d7809224-a0c8-47fa-91ac-2f02578819fe-kube-api-access-c7zzv\") pod \"telemetry-operator-controller-manager-7798859c74-47z2l\" (UID: \"d7809224-a0c8-47fa-91ac-2f02578819fe\") " pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.590816 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.595036 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.595726 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.595837 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.604780 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.606202 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d95gd\" (UniqueName: \"kubernetes.io/projected/e2be6208-86a1-4604-bddc-a3bd98258537-kube-api-access-d95gd\") pod \"watcher-operator-controller-manager-7cd4fb6f79-pg9z9\" (UID: \"e2be6208-86a1-4604-bddc-a3bd98258537\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.606261 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-zm2ph\" (UID: \"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.606357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbhxj\" (UniqueName: \"kubernetes.io/projected/ea4c3d48-c5fc-498d-a095-455572fcbb9e-kube-api-access-sbhxj\") pod \"test-operator-controller-manager-8464cf66df-swr7c\" (UID: \"ea4c3d48-c5fc-498d-a095-455572fcbb9e\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.606384 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5hz\" (UniqueName: \"kubernetes.io/projected/b9b9c1ca-f17c-4fbb-805e-4464e3b93b02-kube-api-access-fg5hz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc\" (UID: \"b9b9c1ca-f17c-4fbb-805e-4464e3b93b02\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.606416 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.606437 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjg4b\" (UniqueName: \"kubernetes.io/projected/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-kube-api-access-qjg4b\") pod \"openstack-operator-controller-manager-6cb9dc54f8-zm2ph\" (UID: \"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.606854 4856 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.606978 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert podName:33b6c3db-1c77-452f-a0b6-26ed5d261a15 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:20:42.606942614 +0000 UTC m=+1085.020335872 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert") pod "infra-operator-controller-manager-769d9c7585-xgmp2" (UID: "33b6c3db-1c77-452f-a0b6-26ed5d261a15") : secret "infra-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.612958 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.620545 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.650801 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.652250 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbhxj\" (UniqueName: \"kubernetes.io/projected/ea4c3d48-c5fc-498d-a095-455572fcbb9e-kube-api-access-sbhxj\") pod \"test-operator-controller-manager-8464cf66df-swr7c\" (UID: \"ea4c3d48-c5fc-498d-a095-455572fcbb9e\") " pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.667242 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d95gd\" (UniqueName: \"kubernetes.io/projected/e2be6208-86a1-4604-bddc-a3bd98258537-kube-api-access-d95gd\") pod \"watcher-operator-controller-manager-7cd4fb6f79-pg9z9\" (UID: \"e2be6208-86a1-4604-bddc-a3bd98258537\") " pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.707469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5hz\" (UniqueName: \"kubernetes.io/projected/b9b9c1ca-f17c-4fbb-805e-4464e3b93b02-kube-api-access-fg5hz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc\" (UID: \"b9b9c1ca-f17c-4fbb-805e-4464e3b93b02\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.707538 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjg4b\" (UniqueName: \"kubernetes.io/projected/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-kube-api-access-qjg4b\") pod \"openstack-operator-controller-manager-6cb9dc54f8-zm2ph\" (UID: \"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.707580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-zm2ph\" (UID: \"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.707703 4856 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.707749 4856 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-cert podName:e08c83ff-ad65-4b8d-8ce9-e21c467aa01f nodeName:}" failed. No retries permitted until 2025-11-22 07:20:42.207733417 +0000 UTC m=+1084.621126675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-cert") pod "openstack-operator-controller-manager-6cb9dc54f8-zm2ph" (UID: "e08c83ff-ad65-4b8d-8ce9-e21c467aa01f") : secret "webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.729570 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.731579 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5hz\" (UniqueName: \"kubernetes.io/projected/b9b9c1ca-f17c-4fbb-805e-4464e3b93b02-kube-api-access-fg5hz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc\" (UID: \"b9b9c1ca-f17c-4fbb-805e-4464e3b93b02\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.737407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjg4b\" (UniqueName: \"kubernetes.io/projected/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-kube-api-access-qjg4b\") pod \"openstack-operator-controller-manager-6cb9dc54f8-zm2ph\" (UID: \"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.747385 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.748122 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.756187 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz"] Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.824159 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" event={"ID":"5ac8c521-cea0-4bdf-a90c-5d61cff9e30d","Type":"ContainerStarted","Data":"821b25a00f98de124799adac196706ac937810f423f8807d20cab547b3141696"} Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.828368 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" event={"ID":"6726415a-8b70-4cde-80fa-5e9954cacb16","Type":"ContainerStarted","Data":"68b0f1fb0059ace716bfb72f3f7540091f48e8d2678d55df85b5865823f25ac5"} Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.910082 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.910257 4856 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: E1122 07:20:41.910328 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert podName:9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1 nodeName:}" failed. No retries permitted until 2025-11-22 07:20:42.910309801 +0000 UTC m=+1085.323703059 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert") pod "openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" (UID: "9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.943076 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" Nov 22 07:20:41 crc kubenswrapper[4856]: I1122 07:20:41.950229 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.074707 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.079116 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.083902 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6"] Nov 22 07:20:42 crc kubenswrapper[4856]: W1122 07:20:42.172649 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11bca657_d3dd_4ecc_b2a7_fc430d0e27d9.slice/crio-a0a39cf624ed9a71dc60c9f8a788c5f69fbc045b4b00da56d506c0ae7f6a4116 WatchSource:0}: Error finding container a0a39cf624ed9a71dc60c9f8a788c5f69fbc045b4b00da56d506c0ae7f6a4116: Status 404 returned error can't find the container with id a0a39cf624ed9a71dc60c9f8a788c5f69fbc045b4b00da56d506c0ae7f6a4116 Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.214435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-zm2ph\" (UID: \"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.235270 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e08c83ff-ad65-4b8d-8ce9-e21c467aa01f-cert\") pod \"openstack-operator-controller-manager-6cb9dc54f8-zm2ph\" (UID: \"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f\") " pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.281643 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.293308 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.410212 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.412248 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh"] Nov 22 07:20:42 crc kubenswrapper[4856]: W1122 07:20:42.416523 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod837a0948_1f0d_4478_8e0a_fd8f897dd107.slice/crio-65530066eda032e3a8dc738886b2c8ec1b0300b3a860116f97e08986766e3150 WatchSource:0}: Error finding container 65530066eda032e3a8dc738886b2c8ec1b0300b3a860116f97e08986766e3150: Status 404 returned error can't find the container with id 65530066eda032e3a8dc738886b2c8ec1b0300b3a860116f97e08986766e3150 Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.487814 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.510106 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.519138 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64"] Nov 22 07:20:42 crc kubenswrapper[4856]: W1122 07:20:42.521951 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd160dfd5_d7c2_4004_9b82_e6883be21331.slice/crio-1994c46656b774d8493c32173a2da08a17b86c567fc4f32d1642a6f6ae760086 WatchSource:0}: Error finding container 1994c46656b774d8493c32173a2da08a17b86c567fc4f32d1642a6f6ae760086: Status 404 returned error can't find the container with id 1994c46656b774d8493c32173a2da08a17b86c567fc4f32d1642a6f6ae760086 Nov 22 07:20:42 crc kubenswrapper[4856]: W1122 07:20:42.535639 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd923f559_33c8_4832_8eec_c8b1879ba8cd.slice/crio-cf8e6def7cea7f26f9cab719553b45a0634876f145884fefc98acce67604fc93 WatchSource:0}: Error finding container cf8e6def7cea7f26f9cab719553b45a0634876f145884fefc98acce67604fc93: Status 404 returned error can't find the container with id cf8e6def7cea7f26f9cab719553b45a0634876f145884fefc98acce67604fc93 Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.622590 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.628028 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/33b6c3db-1c77-452f-a0b6-26ed5d261a15-cert\") pod \"infra-operator-controller-manager-769d9c7585-xgmp2\" (UID: \"33b6c3db-1c77-452f-a0b6-26ed5d261a15\") " pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.655578 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d"] Nov 22 07:20:42 
crc kubenswrapper[4856]: I1122 07:20:42.661234 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.688839 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.750003 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.769435 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-8464cf66df-swr7c"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.777142 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.783286 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v"] Nov 22 07:20:42 crc kubenswrapper[4856]: W1122 07:20:42.785367 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7809224_a0c8_47fa_91ac_2f02578819fe.slice/crio-442cba79a3bc934b0456c97d343b47a613dc83b65d240d677b4b064e2216e64d WatchSource:0}: Error finding container 442cba79a3bc934b0456c97d343b47a613dc83b65d240d677b4b064e2216e64d: Status 404 returned error can't find the container with id 442cba79a3bc934b0456c97d343b47a613dc83b65d240d677b4b064e2216e64d Nov 22 07:20:42 crc kubenswrapper[4856]: W1122 07:20:42.792697 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea4c3d48_c5fc_498d_a095_455572fcbb9e.slice/crio-d3b6e3821572e3f1659ff0b97d5babd6a8d6e67af6dfc2df7cf2f9874bde581c WatchSource:0}: Error finding container d3b6e3821572e3f1659ff0b97d5babd6a8d6e67af6dfc2df7cf2f9874bde581c: Status 404 returned error can't find the container with id d3b6e3821572e3f1659ff0b97d5babd6a8d6e67af6dfc2df7cf2f9874bde581c Nov 22 07:20:42 crc kubenswrapper[4856]: W1122 07:20:42.797794 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5eefaf_63ce_4e1b_8cbe_4f7b4d67e708.slice/crio-daa9f965fc071da3c968e6bb746f51e5096b7bf6096e696ded4dc43fc8503540 WatchSource:0}: Error finding container daa9f965fc071da3c968e6bb746f51e5096b7bf6096e696ded4dc43fc8503540: Status 404 returned error can't find the container with id daa9f965fc071da3c968e6bb746f51e5096b7bf6096e696ded4dc43fc8503540 Nov 22 07:20:42 crc kubenswrapper[4856]: E1122 07:20:42.803834 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: 
{{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-8464cf66df-swr7c_openstack-operators(ea4c3d48-c5fc-498d-a095-455572fcbb9e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:20:42 crc kubenswrapper[4856]: E1122 07:20:42.804005 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c7zzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-7798859c74-47z2l_openstack-operators(d7809224-a0c8-47fa-91ac-2f02578819fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:20:42 crc kubenswrapper[4856]: E1122 07:20:42.814427 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gk6p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6f8c5b86cb-5gk8v_openstack-operators(da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.832933 4856 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.866888 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" event={"ID":"7119f7f3-e9e5-49db-afec-6c3b9fbe5a97","Type":"ContainerStarted","Data":"a3002646246e202b4288fc16a7d0f01092ab4768a7009e4feb57842bae539946"} Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.869381 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" event={"ID":"da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708","Type":"ContainerStarted","Data":"daa9f965fc071da3c968e6bb746f51e5096b7bf6096e696ded4dc43fc8503540"} Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.885441 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" event={"ID":"d923f559-33c8-4832-8eec-c8b1879ba8cd","Type":"ContainerStarted","Data":"cf8e6def7cea7f26f9cab719553b45a0634876f145884fefc98acce67604fc93"} Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.907338 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.913293 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" event={"ID":"5fa0bc39-1657-44bf-9c49-0bdee78de9bd","Type":"ContainerStarted","Data":"9d5c96a08be712fb11c2e04a4364147a191530fe4f31e0d61bd5c7446280c3d7"} Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.921001 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc"] Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.927672 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.933056 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" event={"ID":"b8bdffad-516a-4927-8319-72b583afead1","Type":"ContainerStarted","Data":"169ef0b0f0e3980d0439bea964797ddc7f5af3b35646ce0b376fb0c4ae779457"} Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.933816 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1-cert\") pod \"openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr\" (UID: \"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.957834 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" event={"ID":"bd3472b0-3e99-46e7-bef3-dbd8283ce6de","Type":"ContainerStarted","Data":"15790796907bc0255cd72f503af36b22148aefc31449637bb8b9f5ce1162cc97"} Nov 22 07:20:42 crc kubenswrapper[4856]: E1122 
07:20:42.960353 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fg5hz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc_openstack-operators(b9b9c1ca-f17c-4fbb-805e-4464e3b93b02): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:20:42 crc kubenswrapper[4856]: E1122 07:20:42.961967 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" podUID="b9b9c1ca-f17c-4fbb-805e-4464e3b93b02" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.962638 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" event={"ID":"4d026193-be5d-4202-9379-adbff15842b6","Type":"ContainerStarted","Data":"0f9ddb1cf1b602fc824b2b376e5a96dc0eec3feea53a50fefdffacc67a0f059d"} Nov 22 07:20:42 crc kubenswrapper[4856]: E1122 07:20:42.962941 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjn4s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-799cb6ffd6-qf6ld_openstack-operators(7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.966085 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" event={"ID":"ea4c3d48-c5fc-498d-a095-455572fcbb9e","Type":"ContainerStarted","Data":"d3b6e3821572e3f1659ff0b97d5babd6a8d6e67af6dfc2df7cf2f9874bde581c"} Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.973873 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" event={"ID":"11bca657-d3dd-4ecc-b2a7-fc430d0e27d9","Type":"ContainerStarted","Data":"a0a39cf624ed9a71dc60c9f8a788c5f69fbc045b4b00da56d506c0ae7f6a4116"} Nov 22 07:20:42 crc kubenswrapper[4856]: I1122 07:20:42.985651 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" event={"ID":"837a0948-1f0d-4478-8e0a-fd8f897dd107","Type":"ContainerStarted","Data":"65530066eda032e3a8dc738886b2c8ec1b0300b3a860116f97e08986766e3150"} Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.014342 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" event={"ID":"c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e","Type":"ContainerStarted","Data":"0263224851d42524344567a6085be9402a4059ca881fbea91d929fd5d64c6f10"} Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.018731 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" event={"ID":"d7809224-a0c8-47fa-91ac-2f02578819fe","Type":"ContainerStarted","Data":"442cba79a3bc934b0456c97d343b47a613dc83b65d240d677b4b064e2216e64d"} Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.024563 4856 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2"] Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.028254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" event={"ID":"d160dfd5-d7c2-4004-9b82-e6883be21331","Type":"ContainerStarted","Data":"1994c46656b774d8493c32173a2da08a17b86c567fc4f32d1642a6f6ae760086"} Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.031242 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" event={"ID":"8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de","Type":"ContainerStarted","Data":"fa12af2ac322c117604bb23927078434bfb64955abf768543df629f181431f76"} Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.035199 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" event={"ID":"e2be6208-86a1-4604-bddc-a3bd98258537","Type":"ContainerStarted","Data":"19066e7adfc62990dcc97545d6e7b807414b3fcfb3b8ebbca19899f576817d2a"} Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.038017 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" event={"ID":"1a831555-0593-4c78-9b32-8469445182c6","Type":"ContainerStarted","Data":"1fec577694bc5fb0e98199dcb44fa08e980c55c76b32c8ec9a829b3e56b431ca"} Nov 22 07:20:43 crc kubenswrapper[4856]: I1122 07:20:43.097127 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:43.628890 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr"] Nov 22 07:20:45 crc kubenswrapper[4856]: W1122 07:20:43.640153 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a706ecd_d4b5_402c_a5e5_1cfb7244bcf1.slice/crio-2011da462d24425ac83c0d37606c64fab411a7e66047185d91831154ae19a2e4 WatchSource:0}: Error finding container 2011da462d24425ac83c0d37606c64fab411a7e66047185d91831154ae19a2e4: Status 404 returned error can't find the container with id 2011da462d24425ac83c0d37606c64fab411a7e66047185d91831154ae19a2e4 Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:44.049864 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" event={"ID":"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f","Type":"ContainerStarted","Data":"8af72d60f7037570603e5aa066155bc355da881b5c4d0801af947a4e795a731d"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:44.052073 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" event={"ID":"33b6c3db-1c77-452f-a0b6-26ed5d261a15","Type":"ContainerStarted","Data":"e87b93558fd86704869b00ad9bfa8cf0caa452cef13f931a024245fafec4e909"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:44.053633 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" 
event={"ID":"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1","Type":"ContainerStarted","Data":"2011da462d24425ac83c0d37606c64fab411a7e66047185d91831154ae19a2e4"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:44.055146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" event={"ID":"b9b9c1ca-f17c-4fbb-805e-4464e3b93b02","Type":"ContainerStarted","Data":"2d577066370ed8de6108f4c1ccb8f1c341bc1551e142ed065d6743658c45aa4b"} Nov 22 07:20:45 crc kubenswrapper[4856]: E1122 07:20:44.058414 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" podUID="b9b9c1ca-f17c-4fbb-805e-4464e3b93b02" Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:44.059374 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" event={"ID":"7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0","Type":"ContainerStarted","Data":"3e779c1f58334c3495ee015af7046b1b3c14aae5b55a5b906a151be26c9200b1"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:45.066146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" event={"ID":"da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708","Type":"ContainerStarted","Data":"9ecab3721b87ca85ff4181259b311c1e11f6fa62bec5bc6e7025678786a942b9"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:45.067804 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" event={"ID":"7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0","Type":"ContainerStarted","Data":"c3dd1ecbf465dbc0823db1b84d7713d1da539cb1780c91ae3cf2197690cfc22f"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:45.069334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" event={"ID":"d7809224-a0c8-47fa-91ac-2f02578819fe","Type":"ContainerStarted","Data":"f659e7bf3643aefc03513229b7db519df597256750cd128d702828a7723a05fa"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:45.070652 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" event={"ID":"ea4c3d48-c5fc-498d-a095-455572fcbb9e","Type":"ContainerStarted","Data":"8ecd9f55f141c673e3c3b4eb74aee343b2305f6bac51e8b29e323dd93f31c80b"} Nov 22 07:20:45 crc kubenswrapper[4856]: I1122 07:20:45.071967 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" event={"ID":"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f","Type":"ContainerStarted","Data":"c591bdd8795d0b90e4e20d2b27e652904dd8737159db05f8f95292ca4bfbed5a"} Nov 22 07:20:45 crc kubenswrapper[4856]: E1122 07:20:45.073535 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" podUID="b9b9c1ca-f17c-4fbb-805e-4464e3b93b02" 
Nov 22 07:20:45 crc kubenswrapper[4856]: E1122 07:20:45.187241 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" podUID="d7809224-a0c8-47fa-91ac-2f02578819fe" Nov 22 07:20:45 crc kubenswrapper[4856]: E1122 07:20:45.189738 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" podUID="ea4c3d48-c5fc-498d-a095-455572fcbb9e" Nov 22 07:20:45 crc kubenswrapper[4856]: E1122 07:20:45.190659 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" podUID="7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0" Nov 22 07:20:45 crc kubenswrapper[4856]: E1122 07:20:45.191840 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" podUID="da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708" Nov 22 07:20:46 crc kubenswrapper[4856]: I1122 07:20:46.099592 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" event={"ID":"e08c83ff-ad65-4b8d-8ce9-e21c467aa01f","Type":"ContainerStarted","Data":"7e47dff5555fe0ace69f4951ec2e7da01a8aab949ac94544d021d8a930c634a0"} Nov 22 07:20:46 crc kubenswrapper[4856]: E1122 07:20:46.101537 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d\\\"\"" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" podUID="ea4c3d48-c5fc-498d-a095-455572fcbb9e" Nov 22 07:20:46 crc kubenswrapper[4856]: E1122 07:20:46.102800 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:5324a6d2f76fc3041023b0cbd09a733ef2b59f310d390e4d6483d219eb96494f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" podUID="d7809224-a0c8-47fa-91ac-2f02578819fe" Nov 22 07:20:46 crc kubenswrapper[4856]: E1122 07:20:46.103403 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" podUID="7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0" Nov 22 07:20:46 crc kubenswrapper[4856]: E1122 07:20:46.103664 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" 
podUID="da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708" Nov 22 07:20:46 crc kubenswrapper[4856]: I1122 07:20:46.155245 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" podStartSLOduration=5.155224599 podStartE2EDuration="5.155224599s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:20:46.149370742 +0000 UTC m=+1088.562764020" watchObservedRunningTime="2025-11-22 07:20:46.155224599 +0000 UTC m=+1088.568617867" Nov 22 07:20:47 crc kubenswrapper[4856]: I1122 07:20:47.108198 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:52 crc kubenswrapper[4856]: I1122 07:20:52.416983 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6cb9dc54f8-zm2ph" Nov 22 07:20:57 crc kubenswrapper[4856]: E1122 07:20:57.083545 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 22 07:20:57 crc kubenswrapper[4856]: E1122 07:20:57.084001 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhgjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5bdf4f7f7f-nns6d_openstack-operators(7119f7f3-e9e5-49db-afec-6c3b9fbe5a97): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:20:57 crc kubenswrapper[4856]: E1122 07:20:57.598325 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894" Nov 22 07:20:57 crc kubenswrapper[4856]: E1122 07:20:57.598541 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8hszc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-769d9c7585-xgmp2_openstack-operators(33b6c3db-1c77-452f-a0b6-26ed5d261a15): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:20:58 crc kubenswrapper[4856]: E1122 07:20:58.408579 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 22 07:20:58 crc kubenswrapper[4856]: E1122 07:20:58.409089 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d95gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7cd4fb6f79-pg9z9_openstack-operators(e2be6208-86a1-4604-bddc-a3bd98258537): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:20:58 crc kubenswrapper[4856]: E1122 07:20:58.840052 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" podUID="7119f7f3-e9e5-49db-afec-6c3b9fbe5a97" Nov 22 07:20:58 crc kubenswrapper[4856]: E1122 07:20:58.841709 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" podUID="33b6c3db-1c77-452f-a0b6-26ed5d261a15" Nov 22 07:20:59 crc kubenswrapper[4856]: E1122 07:20:59.071152 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" podUID="e2be6208-86a1-4604-bddc-a3bd98258537" Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.246642 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" event={"ID":"6726415a-8b70-4cde-80fa-5e9954cacb16","Type":"ContainerStarted","Data":"35e947f34738a28627dce18ef1e6c651586eb89b223025c8e237e07b6b09a981"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.255809 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" event={"ID":"5ac8c521-cea0-4bdf-a90c-5d61cff9e30d","Type":"ContainerStarted","Data":"d3acaa2ec455a848b8186ba9cfc5217b8a97a79e77bbe510b7bc461c216229ce"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.282559 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" event={"ID":"5fa0bc39-1657-44bf-9c49-0bdee78de9bd","Type":"ContainerStarted","Data":"706a54046b2b431da74eda9de2f761dd22931fd8007c9cccd8fba8599cdb3096"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.306220 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" 
event={"ID":"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1","Type":"ContainerStarted","Data":"c55d474a1cd36afb7f45ac2991546287f8af3389941ce587b8b0d0e364808e92"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.321822 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" event={"ID":"7119f7f3-e9e5-49db-afec-6c3b9fbe5a97","Type":"ContainerStarted","Data":"524ff467f861ece7ea5deadea5b1fa5eaef0a6e4bbb419db667c0b0c5481e2a5"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.344964 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" event={"ID":"bd3472b0-3e99-46e7-bef3-dbd8283ce6de","Type":"ContainerStarted","Data":"c536e90355c6f0da04b1612618f4743befa629a0bb3f27d9cd0db176a2c900b9"} Nov 22 07:20:59 crc kubenswrapper[4856]: E1122 07:20:59.364960 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" podUID="7119f7f3-e9e5-49db-afec-6c3b9fbe5a97" Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.370089 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" event={"ID":"d923f559-33c8-4832-8eec-c8b1879ba8cd","Type":"ContainerStarted","Data":"072ce6d9dd3729c58f3af780bcf01430362ab27dc3188b39f64924b69015587e"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.383080 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" event={"ID":"837a0948-1f0d-4478-8e0a-fd8f897dd107","Type":"ContainerStarted","Data":"7ab1d83163fecc5415fecb609cf5faedbb90651c70dac43fa151b25ea5465641"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.420199 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" event={"ID":"1a831555-0593-4c78-9b32-8469445182c6","Type":"ContainerStarted","Data":"f850582a962d7b7b7427e49e1811869965027122ec59818cf562884c308a96cf"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.438035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" event={"ID":"8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de","Type":"ContainerStarted","Data":"e5b8143da67c6dc95d6d1e5280b17ed902825ea08e3148381e4ab6856c9bff51"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.449711 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" event={"ID":"11bca657-d3dd-4ecc-b2a7-fc430d0e27d9","Type":"ContainerStarted","Data":"18c2fff0d210182046fc37bf2973dd10f3559ae7d5d6c47f6aa66ff04ca97f99"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.452349 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" event={"ID":"e2be6208-86a1-4604-bddc-a3bd98258537","Type":"ContainerStarted","Data":"d9cca18e1c44d4e9de563fcdbcc8fff66d2143210c536f7a13290a986e95f263"} Nov 22 07:20:59 crc kubenswrapper[4856]: E1122 07:20:59.458872 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" podUID="e2be6208-86a1-4604-bddc-a3bd98258537" Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.461983 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" event={"ID":"4d026193-be5d-4202-9379-adbff15842b6","Type":"ContainerStarted","Data":"f0572c0f1894f0398b5c2f4d7992975d861739795608c42f8237b80844cd56d0"} Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.465152 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" event={"ID":"33b6c3db-1c77-452f-a0b6-26ed5d261a15","Type":"ContainerStarted","Data":"63d59144d26d98283bc07596917a4a4602850e1e4519c2458ca10a89fc3183fa"} Nov 22 07:20:59 crc kubenswrapper[4856]: E1122 07:20:59.467713 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" podUID="33b6c3db-1c77-452f-a0b6-26ed5d261a15" Nov 22 07:20:59 crc kubenswrapper[4856]: I1122 07:20:59.486116 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" event={"ID":"c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e","Type":"ContainerStarted","Data":"3bb39458a005ce2a229625fae0f6744642fe694abd9407f98967a2af3b937679"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.518274 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" event={"ID":"4d026193-be5d-4202-9379-adbff15842b6","Type":"ContainerStarted","Data":"b8cceab3dfc754c42e16326e92a893f19f2063f2a2cde38aae70cb424c2a5939"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.518442 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.537738 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" event={"ID":"6726415a-8b70-4cde-80fa-5e9954cacb16","Type":"ContainerStarted","Data":"d674d576fb3f834c921060d31aa5bc644a24e705f919784634b9e8f44aa3e40e"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.538474 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.556819 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" event={"ID":"5ac8c521-cea0-4bdf-a90c-5d61cff9e30d","Type":"ContainerStarted","Data":"7988d2d47476f1599289061e8f2df84c4e6e547e4f27e51a748872cd24fa3f66"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.557309 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" Nov 22 07:21:00 
crc kubenswrapper[4856]: I1122 07:21:00.562447 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" event={"ID":"5fa0bc39-1657-44bf-9c49-0bdee78de9bd","Type":"ContainerStarted","Data":"4feb5b590de12c533e0e6ecde209978f1b74c44b9380b243dfb9b199aa40d3bb"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.563112 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.571562 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" podStartSLOduration=4.532807093 podStartE2EDuration="20.571533571s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.327933002 +0000 UTC m=+1084.741326270" lastFinishedPulling="2025-11-22 07:20:58.36665949 +0000 UTC m=+1100.780052748" observedRunningTime="2025-11-22 07:21:00.547988901 +0000 UTC m=+1102.961382169" watchObservedRunningTime="2025-11-22 07:21:00.571533571 +0000 UTC m=+1102.984926829" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.574208 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" podStartSLOduration=4.134389777 podStartE2EDuration="20.574190015s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:41.818373656 +0000 UTC m=+1084.231766914" lastFinishedPulling="2025-11-22 07:20:58.258173894 +0000 UTC m=+1100.671567152" observedRunningTime="2025-11-22 07:21:00.56712502 +0000 UTC m=+1102.980518278" watchObservedRunningTime="2025-11-22 07:21:00.574190015 +0000 UTC m=+1102.987583273" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.583347 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" event={"ID":"bd3472b0-3e99-46e7-bef3-dbd8283ce6de","Type":"ContainerStarted","Data":"cc92f193e35b3cb856204091dd49853025925cf028190c9b0559306e8214b699"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.584680 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.588255 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" event={"ID":"d923f559-33c8-4832-8eec-c8b1879ba8cd","Type":"ContainerStarted","Data":"b9ec5301aa92adb5748be760719c7335cdc5e34b1f113ffdb17ae9f6d98e967e"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.588942 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.608431 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" podStartSLOduration=4.022897324 podStartE2EDuration="20.608393899s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:41.779818208 +0000 UTC m=+1084.193211466" lastFinishedPulling="2025-11-22 07:20:58.365314753 +0000 UTC m=+1100.778708041" observedRunningTime="2025-11-22 07:21:00.591674747 +0000 UTC 
m=+1103.005068005" watchObservedRunningTime="2025-11-22 07:21:00.608393899 +0000 UTC m=+1103.021787157" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.620014 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" event={"ID":"1a831555-0593-4c78-9b32-8469445182c6","Type":"ContainerStarted","Data":"06ac077654da5f2680677a448d92dc87ea5347f0681210cbf1ee807e35fe7d07"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.620200 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.623980 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" podStartSLOduration=4.590353798 podStartE2EDuration="20.623954629s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.333655036 +0000 UTC m=+1084.747048294" lastFinishedPulling="2025-11-22 07:20:58.367255867 +0000 UTC m=+1100.780649125" observedRunningTime="2025-11-22 07:21:00.620556046 +0000 UTC m=+1103.033949314" watchObservedRunningTime="2025-11-22 07:21:00.623954629 +0000 UTC m=+1103.037347887" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.633051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" event={"ID":"c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e","Type":"ContainerStarted","Data":"027d6899a769040bf4a87b6baf8d617bda46228e3a337cd0ec43138c7f05df96"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.633953 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.647662 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" event={"ID":"d160dfd5-d7c2-4004-9b82-e6883be21331","Type":"ContainerStarted","Data":"ce81d028454f1c5b335deb450b5e7a6c255ca8718e6b3eaf2d38d21b6296a0b6"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.647946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" event={"ID":"d160dfd5-d7c2-4004-9b82-e6883be21331","Type":"ContainerStarted","Data":"b923a39834bff1e8b848682f7c51b7261c7287a6c676ceb1ff7033593c2409f1"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.647976 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.652322 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" podStartSLOduration=4.797766298 podStartE2EDuration="20.65217202s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.547583075 +0000 UTC m=+1084.960976343" lastFinishedPulling="2025-11-22 07:20:58.401988797 +0000 UTC m=+1100.815382065" observedRunningTime="2025-11-22 07:21:00.650988467 +0000 UTC m=+1103.064381725" watchObservedRunningTime="2025-11-22 07:21:00.65217202 +0000 UTC m=+1103.065565278" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.659205 4856 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" event={"ID":"8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de","Type":"ContainerStarted","Data":"d86fc26a913c8f2040153a86cfaa1ecaa9ed4e39a58fa841d036401e66a3e10e"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.659281 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.702044 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" event={"ID":"11bca657-d3dd-4ecc-b2a7-fc430d0e27d9","Type":"ContainerStarted","Data":"28bd830bd11b98db536f73490810f1ca77311f20adb3df9300d768e0c42dd671"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.702663 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" podStartSLOduration=4.485362138 podStartE2EDuration="20.702593253s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.039710004 +0000 UTC m=+1084.453103262" lastFinishedPulling="2025-11-22 07:20:58.256941119 +0000 UTC m=+1100.670334377" observedRunningTime="2025-11-22 07:21:00.702139249 +0000 UTC m=+1103.115532527" watchObservedRunningTime="2025-11-22 07:21:00.702593253 +0000 UTC m=+1103.115986511" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.702841 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.748144 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" podStartSLOduration=4.18115766 podStartE2EDuration="19.74811196s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.801928802 +0000 UTC m=+1085.215322060" lastFinishedPulling="2025-11-22 07:20:58.368883092 +0000 UTC m=+1100.782276360" observedRunningTime="2025-11-22 07:21:00.745280291 +0000 UTC m=+1103.158673569" watchObservedRunningTime="2025-11-22 07:21:00.74811196 +0000 UTC m=+1103.161505218" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.775895 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" podStartSLOduration=4.610554196 podStartE2EDuration="20.775877097s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.093924483 +0000 UTC m=+1084.507317741" lastFinishedPulling="2025-11-22 07:20:58.259247384 +0000 UTC m=+1100.672640642" observedRunningTime="2025-11-22 07:21:00.761333215 +0000 UTC m=+1103.174726473" watchObservedRunningTime="2025-11-22 07:21:00.775877097 +0000 UTC m=+1103.189270355" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.779616 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" event={"ID":"837a0948-1f0d-4478-8e0a-fd8f897dd107","Type":"ContainerStarted","Data":"72da2cc00cca92d03f76d947a13370cb4eb8ef935f78a229b2b61f4532c2a253"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.781938 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" 
Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.799630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" event={"ID":"b8bdffad-516a-4927-8319-72b583afead1","Type":"ContainerStarted","Data":"0d3eb4deed204cf1cfb92ca5628f016ca029fa602418bf8c14d477a5255d7526"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.799693 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" event={"ID":"b8bdffad-516a-4927-8319-72b583afead1","Type":"ContainerStarted","Data":"906517996b86030c6520597827c64d85dd1cc02cf7904c03b133b4bf3696d3ac"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.801395 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.804667 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" podStartSLOduration=4.6295194330000005 podStartE2EDuration="20.804640032s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.19076485 +0000 UTC m=+1084.604158108" lastFinishedPulling="2025-11-22 07:20:58.365885449 +0000 UTC m=+1100.779278707" observedRunningTime="2025-11-22 07:21:00.794149072 +0000 UTC m=+1103.207542330" watchObservedRunningTime="2025-11-22 07:21:00.804640032 +0000 UTC m=+1103.218033280" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.814934 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" event={"ID":"9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1","Type":"ContainerStarted","Data":"62aad0228e85ddef3635d1776b030ae5d8adf974283b86bc0a34b35b5238b653"} Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.814979 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:21:00 crc kubenswrapper[4856]: E1122 07:21:00.821184 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" podUID="7119f7f3-e9e5-49db-afec-6c3b9fbe5a97" Nov 22 07:21:00 crc kubenswrapper[4856]: E1122 07:21:00.821477 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/infra-operator@sha256:86df58f744c1d23233cc98f6ea17c8d6da637c50003d0fc8c100045594aa9894\\\"\"" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" podUID="33b6c3db-1c77-452f-a0b6-26ed5d261a15" Nov 22 07:21:00 crc kubenswrapper[4856]: E1122 07:21:00.821544 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" 
podUID="e2be6208-86a1-4604-bddc-a3bd98258537" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.852619 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" podStartSLOduration=5.022353781 podStartE2EDuration="20.852601227s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.537496244 +0000 UTC m=+1084.950889502" lastFinishedPulling="2025-11-22 07:20:58.36774369 +0000 UTC m=+1100.781136948" observedRunningTime="2025-11-22 07:21:00.851770863 +0000 UTC m=+1103.265164131" watchObservedRunningTime="2025-11-22 07:21:00.852601227 +0000 UTC m=+1103.265994485" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.927278 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" podStartSLOduration=5.067253879 podStartE2EDuration="20.927238278s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.525130451 +0000 UTC m=+1084.938523709" lastFinishedPulling="2025-11-22 07:20:58.38511485 +0000 UTC m=+1100.798508108" observedRunningTime="2025-11-22 07:21:00.896987522 +0000 UTC m=+1103.310380780" watchObservedRunningTime="2025-11-22 07:21:00.927238278 +0000 UTC m=+1103.340631536" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.944209 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" podStartSLOduration=4.003234822 podStartE2EDuration="19.944193267s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.42444501 +0000 UTC m=+1084.837838268" lastFinishedPulling="2025-11-22 07:20:58.365403415 +0000 UTC m=+1100.778796713" observedRunningTime="2025-11-22 07:21:00.940778683 +0000 UTC m=+1103.354171951" watchObservedRunningTime="2025-11-22 07:21:00.944193267 +0000 UTC m=+1103.357586515" Nov 22 07:21:00 crc kubenswrapper[4856]: I1122 07:21:00.986718 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" podStartSLOduration=5.370761049 podStartE2EDuration="19.986693981s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:43.642402116 +0000 UTC m=+1086.055795374" lastFinishedPulling="2025-11-22 07:20:58.258335058 +0000 UTC m=+1100.671728306" observedRunningTime="2025-11-22 07:21:00.978727941 +0000 UTC m=+1103.392121219" watchObservedRunningTime="2025-11-22 07:21:00.986693981 +0000 UTC m=+1103.400087249" Nov 22 07:21:01 crc kubenswrapper[4856]: I1122 07:21:01.055084 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" podStartSLOduration=4.836589704 podStartE2EDuration="21.055069649s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.14653911 +0000 UTC m=+1084.559932368" lastFinishedPulling="2025-11-22 07:20:58.365019055 +0000 UTC m=+1100.778412313" observedRunningTime="2025-11-22 07:21:01.052224341 +0000 UTC m=+1103.465617599" watchObservedRunningTime="2025-11-22 07:21:01.055069649 +0000 UTC m=+1103.468462907" Nov 22 07:21:03 crc kubenswrapper[4856]: I1122 07:21:03.036497 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-6d8fd67bf7-w92jz" Nov 22 07:21:03 crc kubenswrapper[4856]: I1122 07:21:03.045252 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-6dc664666c-tgk9d" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.048701 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" event={"ID":"da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708","Type":"ContainerStarted","Data":"3b634dc0f10447f85f3344da70fda87f12775c1d5daf1636a955e849dfa00521"} Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.049219 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.050326 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" event={"ID":"7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0","Type":"ContainerStarted","Data":"fef31913844a829e997b5b1a96e3c93d18d80f420099bc4956b88b571c79c090"} Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.050548 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.051611 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" event={"ID":"b9b9c1ca-f17c-4fbb-805e-4464e3b93b02","Type":"ContainerStarted","Data":"2da449148545f68b25d6be97e2e2c83964114aa45e23800e67c41f6b1ecd4d2a"} Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.053381 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" event={"ID":"d7809224-a0c8-47fa-91ac-2f02578819fe","Type":"ContainerStarted","Data":"3372f5a1b33d5f24f4c2e842d4feed54afe2fa7624ba96f0b05f5908aff1ae18"} Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.054288 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.056613 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" event={"ID":"ea4c3d48-c5fc-498d-a095-455572fcbb9e","Type":"ContainerStarted","Data":"e9f4035506d7330332cb71da22770d9612a393077a3a14ccf9c2dcbbd25328c0"} Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.056844 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.068394 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" podStartSLOduration=3.530494202 podStartE2EDuration="26.068373739s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.814247454 +0000 UTC m=+1085.227640712" lastFinishedPulling="2025-11-22 07:21:05.352126991 +0000 UTC m=+1107.765520249" observedRunningTime="2025-11-22 07:21:06.0640766 +0000 UTC m=+1108.477469858" watchObservedRunningTime="2025-11-22 07:21:06.068373739 +0000 UTC m=+1108.481766997" Nov 22 07:21:06 
crc kubenswrapper[4856]: I1122 07:21:06.090844 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" podStartSLOduration=2.478121392 podStartE2EDuration="25.090821639s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.803602477 +0000 UTC m=+1085.216995735" lastFinishedPulling="2025-11-22 07:21:05.416302724 +0000 UTC m=+1107.829695982" observedRunningTime="2025-11-22 07:21:06.085077141 +0000 UTC m=+1108.498470409" watchObservedRunningTime="2025-11-22 07:21:06.090821639 +0000 UTC m=+1108.504214897" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.107762 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" podStartSLOduration=2.558897292 podStartE2EDuration="25.107737306s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.803639378 +0000 UTC m=+1085.217032636" lastFinishedPulling="2025-11-22 07:21:05.352479392 +0000 UTC m=+1107.765872650" observedRunningTime="2025-11-22 07:21:06.103493509 +0000 UTC m=+1108.516886777" watchObservedRunningTime="2025-11-22 07:21:06.107737306 +0000 UTC m=+1108.521130564" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.125151 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" podStartSLOduration=2.686711943 podStartE2EDuration="25.125136277s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.960326106 +0000 UTC m=+1085.373719364" lastFinishedPulling="2025-11-22 07:21:05.39875044 +0000 UTC m=+1107.812143698" observedRunningTime="2025-11-22 07:21:06.12162515 +0000 UTC m=+1108.535018418" watchObservedRunningTime="2025-11-22 07:21:06.125136277 +0000 UTC m=+1108.538529535" Nov 22 07:21:06 crc kubenswrapper[4856]: I1122 07:21:06.138024 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc" podStartSLOduration=2.7457908250000003 podStartE2EDuration="25.138001153s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.960219393 +0000 UTC m=+1085.373612651" lastFinishedPulling="2025-11-22 07:21:05.352429721 +0000 UTC m=+1107.765822979" observedRunningTime="2025-11-22 07:21:06.136770349 +0000 UTC m=+1108.550163607" watchObservedRunningTime="2025-11-22 07:21:06.138001153 +0000 UTC m=+1108.551394411" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.016857 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7768f8c84f-8hnhw" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.089892 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-56dfb6b67f-f29xt" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.127437 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8667fbf6f6-bsv2v" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.152395 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-bf4c6585d-29dmw" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.163094 
4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d86b44686-v7qv6" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.337387 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-66b7d6f598-fkrzv" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.521591 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5c75d7c94b-2cv5t" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.537885 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7879fb76fd-cnd64" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.593649 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7bb88cb858-89ntq" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.600081 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-86d796d84d-rk8pp" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.600849 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6fdc856c5d-nmllh" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.619061 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6f8c5b86cb-5gk8v" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.669214 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-799cb6ffd6-qf6ld" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.670260 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7798859c74-47z2l" Nov 22 07:21:11 crc kubenswrapper[4856]: I1122 07:21:11.734591 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-8464cf66df-swr7c" Nov 22 07:21:12 crc kubenswrapper[4856]: I1122 07:21:12.712484 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:21:13 crc kubenswrapper[4856]: I1122 07:21:13.111057 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr" Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.332061 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" event={"ID":"33b6c3db-1c77-452f-a0b6-26ed5d261a15","Type":"ContainerStarted","Data":"3ed1b1959911b6eb146d8ab610ae36ed341a1565f39d11eb4841ab754749130d"} Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.332781 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.334069 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" 
event={"ID":"e2be6208-86a1-4604-bddc-a3bd98258537","Type":"ContainerStarted","Data":"1efd2c0477f7090a745117f288508fc823915006b4fb701560ee993723d9740b"} Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.334276 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.336892 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" event={"ID":"7119f7f3-e9e5-49db-afec-6c3b9fbe5a97","Type":"ContainerStarted","Data":"c057d9a90eaf342e8cb9ef62238b684c4ce05f50765e445458ea8257d6480aff"} Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.337075 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.377014 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" podStartSLOduration=3.796705869 podStartE2EDuration="1m3.376995465s" podCreationTimestamp="2025-11-22 07:20:40 +0000 UTC" firstStartedPulling="2025-11-22 07:20:43.035304064 +0000 UTC m=+1085.448697322" lastFinishedPulling="2025-11-22 07:21:42.61559367 +0000 UTC m=+1145.028986918" observedRunningTime="2025-11-22 07:21:43.355856181 +0000 UTC m=+1145.769249439" watchObservedRunningTime="2025-11-22 07:21:43.376995465 +0000 UTC m=+1145.790388723" Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.380783 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" podStartSLOduration=2.433635667 podStartE2EDuration="1m2.380767919s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.669819385 +0000 UTC m=+1085.083212643" lastFinishedPulling="2025-11-22 07:21:42.616951617 +0000 UTC m=+1145.030344895" observedRunningTime="2025-11-22 07:21:43.373071916 +0000 UTC m=+1145.786465184" watchObservedRunningTime="2025-11-22 07:21:43.380767919 +0000 UTC m=+1145.794161177" Nov 22 07:21:43 crc kubenswrapper[4856]: I1122 07:21:43.395996 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" podStartSLOduration=2.451459637 podStartE2EDuration="1m2.395974499s" podCreationTimestamp="2025-11-22 07:20:41 +0000 UTC" firstStartedPulling="2025-11-22 07:20:42.672341153 +0000 UTC m=+1085.085734411" lastFinishedPulling="2025-11-22 07:21:42.616856015 +0000 UTC m=+1145.030249273" observedRunningTime="2025-11-22 07:21:43.39493538 +0000 UTC m=+1145.808328658" watchObservedRunningTime="2025-11-22 07:21:43.395974499 +0000 UTC m=+1145.809367767" Nov 22 07:21:51 crc kubenswrapper[4856]: I1122 07:21:51.598711 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5bdf4f7f7f-nns6d" Nov 22 07:21:51 crc kubenswrapper[4856]: I1122 07:21:51.751004 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7cd4fb6f79-pg9z9" Nov 22 07:21:52 crc kubenswrapper[4856]: I1122 07:21:52.703235 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-769d9c7585-xgmp2" Nov 22 07:22:10 crc 
kubenswrapper[4856]: I1122 07:22:10.115764 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-278g7"] Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.117431 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.119481 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.119802 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.120326 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-xbp5k" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.122997 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.128100 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-278g7"] Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.214451 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6584b49599-ktn5n"] Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.239323 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-ktn5n"] Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.239425 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.243293 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.306179 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-config\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.306265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kspj9\" (UniqueName: \"kubernetes.io/projected/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-kube-api-access-kspj9\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.306552 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pkd\" (UniqueName: \"kubernetes.io/projected/db623bd8-4495-47cf-acc1-abaaa8b754d7-kube-api-access-25pkd\") pod \"dnsmasq-dns-7bdd77c89-278g7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.306684 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-dns-svc\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.306811 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db623bd8-4495-47cf-acc1-abaaa8b754d7-config\") pod \"dnsmasq-dns-7bdd77c89-278g7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.409015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-dns-svc\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.409076 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db623bd8-4495-47cf-acc1-abaaa8b754d7-config\") pod \"dnsmasq-dns-7bdd77c89-278g7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.409129 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-config\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.409169 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kspj9\" (UniqueName: \"kubernetes.io/projected/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-kube-api-access-kspj9\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.409209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25pkd\" (UniqueName: \"kubernetes.io/projected/db623bd8-4495-47cf-acc1-abaaa8b754d7-kube-api-access-25pkd\") pod \"dnsmasq-dns-7bdd77c89-278g7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.410128 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db623bd8-4495-47cf-acc1-abaaa8b754d7-config\") pod \"dnsmasq-dns-7bdd77c89-278g7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.410147 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-config\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.410168 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-dns-svc\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.443981 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pkd\" (UniqueName: 
\"kubernetes.io/projected/db623bd8-4495-47cf-acc1-abaaa8b754d7-kube-api-access-25pkd\") pod \"dnsmasq-dns-7bdd77c89-278g7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.446227 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kspj9\" (UniqueName: \"kubernetes.io/projected/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-kube-api-access-kspj9\") pod \"dnsmasq-dns-6584b49599-ktn5n\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.559614 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:10 crc kubenswrapper[4856]: I1122 07:22:10.738981 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:11 crc kubenswrapper[4856]: W1122 07:22:11.083137 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb623bd8_4495_47cf_acc1_abaaa8b754d7.slice/crio-a99c582c7865ac9ab4a6e80a2c36980d536641ee39baa2ed77e84b85c6ee1a9e WatchSource:0}: Error finding container a99c582c7865ac9ab4a6e80a2c36980d536641ee39baa2ed77e84b85c6ee1a9e: Status 404 returned error can't find the container with id a99c582c7865ac9ab4a6e80a2c36980d536641ee39baa2ed77e84b85c6ee1a9e Nov 22 07:22:11 crc kubenswrapper[4856]: I1122 07:22:11.083633 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-278g7"] Nov 22 07:22:11 crc kubenswrapper[4856]: I1122 07:22:11.112565 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-ktn5n"] Nov 22 07:22:11 crc kubenswrapper[4856]: W1122 07:22:11.121884 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd95245ab_74a0_4bfc_bbe1_6522d32e06dd.slice/crio-ffd9376b677c9f2cf8e4200ddc71b938c5f2ed87d64b6ec6df7c06429b9ea9bd WatchSource:0}: Error finding container ffd9376b677c9f2cf8e4200ddc71b938c5f2ed87d64b6ec6df7c06429b9ea9bd: Status 404 returned error can't find the container with id ffd9376b677c9f2cf8e4200ddc71b938c5f2ed87d64b6ec6df7c06429b9ea9bd Nov 22 07:22:11 crc kubenswrapper[4856]: I1122 07:22:11.530842 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-ktn5n" event={"ID":"d95245ab-74a0-4bfc-bbe1-6522d32e06dd","Type":"ContainerStarted","Data":"ffd9376b677c9f2cf8e4200ddc71b938c5f2ed87d64b6ec6df7c06429b9ea9bd"} Nov 22 07:22:11 crc kubenswrapper[4856]: I1122 07:22:11.531886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-278g7" event={"ID":"db623bd8-4495-47cf-acc1-abaaa8b754d7","Type":"ContainerStarted","Data":"a99c582c7865ac9ab4a6e80a2c36980d536641ee39baa2ed77e84b85c6ee1a9e"} Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.445532 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-ktn5n"] Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.472114 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-57w79"] Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.473719 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.482241 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-57w79"] Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.545296 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lckt\" (UniqueName: \"kubernetes.io/projected/a2717bae-6059-463a-a2e6-eec30a5b57f4-kube-api-access-4lckt\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.545359 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-config\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.545485 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-dns-svc\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.647126 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-dns-svc\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.647204 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lckt\" (UniqueName: \"kubernetes.io/projected/a2717bae-6059-463a-a2e6-eec30a5b57f4-kube-api-access-4lckt\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.647247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-config\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.648622 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-config\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.648963 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-dns-svc\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.671937 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lckt\" (UniqueName: 
\"kubernetes.io/projected/a2717bae-6059-463a-a2e6-eec30a5b57f4-kube-api-access-4lckt\") pod \"dnsmasq-dns-6d8746976c-57w79\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.762334 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-278g7"] Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.797485 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.827955 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-44ln7"] Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.829188 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.848995 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-44ln7"] Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.852923 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-dns-svc\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.852968 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-config\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.852998 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vh5r\" (UniqueName: \"kubernetes.io/projected/3fa7fa19-a5f6-44ca-baa2-950db382636e-kube-api-access-2vh5r\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.956234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-dns-svc\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.957496 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-config\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.957613 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vh5r\" (UniqueName: \"kubernetes.io/projected/3fa7fa19-a5f6-44ca-baa2-950db382636e-kube-api-access-2vh5r\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.957459 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-dns-svc\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:12 crc kubenswrapper[4856]: I1122 07:22:12.958470 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-config\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.003619 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vh5r\" (UniqueName: \"kubernetes.io/projected/3fa7fa19-a5f6-44ca-baa2-950db382636e-kube-api-access-2vh5r\") pod \"dnsmasq-dns-6486446b9f-44ln7\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.256924 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.497867 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-57w79"] Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.555562 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-57w79" event={"ID":"a2717bae-6059-463a-a2e6-eec30a5b57f4","Type":"ContainerStarted","Data":"ae866f76d586e5ff151b5d60522c314ed92cb15583d2ffe4f2ba95fdae5e300b"} Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.656677 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.658808 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.661690 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.663001 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-2lblc" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.663272 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.663429 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.664431 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.665150 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.665906 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.674105 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773325 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773401 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773455 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773560 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773583 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773609 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773636 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773654 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkd7q\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-kube-api-access-pkd7q\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773673 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.773693 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.816066 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-44ln7"] Nov 22 07:22:13 crc kubenswrapper[4856]: W1122 07:22:13.826455 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fa7fa19_a5f6_44ca_baa2_950db382636e.slice/crio-516c1de75f1977cfad93b4ff59c0af37cbe3096646786ae1361d347291754645 WatchSource:0}: Error finding container 516c1de75f1977cfad93b4ff59c0af37cbe3096646786ae1361d347291754645: Status 404 returned error can't find the container with id 516c1de75f1977cfad93b4ff59c0af37cbe3096646786ae1361d347291754645 Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876398 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876460 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876552 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkd7q\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-kube-api-access-pkd7q\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876612 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876641 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876690 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876755 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876773 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876802 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.876883 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.877279 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.877674 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.877732 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.878309 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.878775 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.879286 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.880729 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.884349 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.885723 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.890760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.898345 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.901858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkd7q\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-kube-api-access-pkd7q\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:13 crc kubenswrapper[4856]: I1122 07:22:13.910184 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.021906 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.023580 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.028924 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029161 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029215 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029161 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029274 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029372 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029421 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029558 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-krdd6" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.029608 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182334 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182715 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182739 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182776 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182816 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182879 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182906 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.182927 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 
crc kubenswrapper[4856]: I1122 07:22:14.182974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcvcb\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-kube-api-access-xcvcb\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.183008 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284707 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284763 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284788 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284860 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284886 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284920 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.284971 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.285026 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcvcb\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-kube-api-access-xcvcb\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.285059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.285314 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.285627 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.285992 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.285995 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.286877 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.287143 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.289873 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.289985 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.290398 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.297125 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.311062 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcvcb\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-kube-api-access-xcvcb\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.334173 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.366238 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.571707 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" event={"ID":"3fa7fa19-a5f6-44ca-baa2-950db382636e","Type":"ContainerStarted","Data":"516c1de75f1977cfad93b4ff59c0af37cbe3096646786ae1361d347291754645"} Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.601251 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:22:14 crc kubenswrapper[4856]: W1122 07:22:14.637712 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ac8c44e_0667_43f7_aebd_a7b4c5bcb429.slice/crio-791ac470b0a6a247ad8c7af344f1a356c217449002fca7553a956a180dba9c6b WatchSource:0}: Error finding container 791ac470b0a6a247ad8c7af344f1a356c217449002fca7553a956a180dba9c6b: Status 404 returned error can't find the container with id 791ac470b0a6a247ad8c7af344f1a356c217449002fca7553a956a180dba9c6b Nov 22 07:22:14 crc kubenswrapper[4856]: I1122 07:22:14.809961 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.224616 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.227455 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.230731 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-xm8bq" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.231612 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.231831 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.232010 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.247230 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.252110 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.319858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.319915 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.319971 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.320000 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcqg5\" (UniqueName: \"kubernetes.io/projected/b27ecbc9-0058-49d3-8715-826a4a1bb544-kube-api-access-wcqg5\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.320307 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.320339 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.320421 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.321050 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422350 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422405 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422429 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422466 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcqg5\" (UniqueName: \"kubernetes.io/projected/b27ecbc9-0058-49d3-8715-826a4a1bb544-kube-api-access-wcqg5\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422546 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422563 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.422592 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.423308 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.423338 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.424317 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-default\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.425230 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-kolla-config\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.426060 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.426936 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.441027 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcqg5\" (UniqueName: \"kubernetes.io/projected/b27ecbc9-0058-49d3-8715-826a4a1bb544-kube-api-access-wcqg5\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.447001 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.456754 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-galera-0\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.570691 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.587931 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429","Type":"ContainerStarted","Data":"791ac470b0a6a247ad8c7af344f1a356c217449002fca7553a956a180dba9c6b"} Nov 22 07:22:15 crc kubenswrapper[4856]: I1122 07:22:15.596901 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89","Type":"ContainerStarted","Data":"28bac9423af5affa00aaa0be97f54a42134b5ba014610634481243e61a0a4c61"} Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.045303 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:22:16 crc kubenswrapper[4856]: W1122 07:22:16.064340 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb27ecbc9_0058_49d3_8715_826a4a1bb544.slice/crio-88b50e703fe21af5e61c3aaf7e283d3f6fb2d2434709cdbb182ffec13dadd42d WatchSource:0}: Error finding container 88b50e703fe21af5e61c3aaf7e283d3f6fb2d2434709cdbb182ffec13dadd42d: Status 404 returned error can't find the container with id 88b50e703fe21af5e61c3aaf7e283d3f6fb2d2434709cdbb182ffec13dadd42d Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.594622 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.596015 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.637498 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.637792 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.637985 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.638375 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rztg4" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.649357 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.670009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27ecbc9-0058-49d3-8715-826a4a1bb544","Type":"ContainerStarted","Data":"88b50e703fe21af5e61c3aaf7e283d3f6fb2d2434709cdbb182ffec13dadd42d"} Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746183 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746271 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746306 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746343 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746380 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746434 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746540 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7nj\" (UniqueName: \"kubernetes.io/projected/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kube-api-access-kx7nj\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.746572 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.848630 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7nj\" (UniqueName: \"kubernetes.io/projected/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kube-api-access-kx7nj\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.849410 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.849459 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.849608 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.849638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.849678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.849780 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: 
\"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.849944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.850214 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.850502 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.851223 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.853504 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.861681 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.870762 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.880816 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7nj\" (UniqueName: \"kubernetes.io/projected/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kube-api-access-kx7nj\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.897036 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.905947 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:16 crc kubenswrapper[4856]: I1122 07:22:16.969807 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.078801 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.079859 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.085154 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fxfhb" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.085366 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.085485 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.104826 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.258415 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-config-data\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.258845 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kolla-config\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.258886 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.258955 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4gch\" (UniqueName: \"kubernetes.io/projected/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kube-api-access-q4gch\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.258986 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.364082 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-config-data\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.364283 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kolla-config\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.364370 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.364493 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4gch\" (UniqueName: \"kubernetes.io/projected/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kube-api-access-q4gch\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.364561 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.365738 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kolla-config\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.367263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-config-data\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.377133 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-combined-ca-bundle\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.377150 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-memcached-tls-certs\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.393695 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4gch\" (UniqueName: \"kubernetes.io/projected/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kube-api-access-q4gch\") pod \"memcached-0\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.400809 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 22 07:22:17 crc kubenswrapper[4856]: I1122 07:22:17.627351 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:22:18 crc kubenswrapper[4856]: I1122 07:22:18.706959 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:22:18 crc kubenswrapper[4856]: I1122 07:22:18.708622 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:22:18 crc kubenswrapper[4856]: I1122 07:22:18.719779 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-kz2d9" Nov 22 07:22:18 crc kubenswrapper[4856]: I1122 07:22:18.775952 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:22:18 crc kubenswrapper[4856]: I1122 07:22:18.896925 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chm6g\" (UniqueName: \"kubernetes.io/projected/898257c3-b9a4-4d7b-8484-f3466c19e051-kube-api-access-chm6g\") pod \"kube-state-metrics-0\" (UID: \"898257c3-b9a4-4d7b-8484-f3466c19e051\") " pod="openstack/kube-state-metrics-0" Nov 22 07:22:18 crc kubenswrapper[4856]: I1122 07:22:18.999078 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chm6g\" (UniqueName: \"kubernetes.io/projected/898257c3-b9a4-4d7b-8484-f3466c19e051-kube-api-access-chm6g\") pod \"kube-state-metrics-0\" (UID: \"898257c3-b9a4-4d7b-8484-f3466c19e051\") " pod="openstack/kube-state-metrics-0" Nov 22 07:22:19 crc kubenswrapper[4856]: I1122 07:22:19.029419 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chm6g\" (UniqueName: \"kubernetes.io/projected/898257c3-b9a4-4d7b-8484-f3466c19e051-kube-api-access-chm6g\") pod \"kube-state-metrics-0\" (UID: \"898257c3-b9a4-4d7b-8484-f3466c19e051\") " pod="openstack/kube-state-metrics-0" Nov 22 07:22:19 crc kubenswrapper[4856]: I1122 07:22:19.066234 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.665796 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hwrb9"] Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.667172 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.672120 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qvlsd" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.672152 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.672306 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.682982 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hwrb9"] Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.774168 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.774254 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-combined-ca-bundle\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.774306 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-log-ovn\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.774363 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-ovn-controller-tls-certs\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.774438 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfz9n\" (UniqueName: \"kubernetes.io/projected/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-kube-api-access-xfz9n\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.774464 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-scripts\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.774493 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run-ovn\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.814500 4856 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zz5h4"] Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.817273 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.847317 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zz5h4"] Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.875885 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-ovn-controller-tls-certs\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.875967 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfz9n\" (UniqueName: \"kubernetes.io/projected/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-kube-api-access-xfz9n\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.875987 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-scripts\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.876324 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run-ovn\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.876352 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.876374 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-combined-ca-bundle\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.876404 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-log-ovn\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.876839 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-log-ovn\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.877035 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.877149 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run-ovn\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.878357 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-scripts\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.881435 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-combined-ca-bundle\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.898996 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-ovn-controller-tls-certs\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.912579 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfz9n\" (UniqueName: \"kubernetes.io/projected/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-kube-api-access-xfz9n\") pod \"ovn-controller-hwrb9\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.977641 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg5jb\" (UniqueName: \"kubernetes.io/projected/285d77d1-e278-4664-97f0-7562e2740a0b-kube-api-access-bg5jb\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.977736 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285d77d1-e278-4664-97f0-7562e2740a0b-scripts\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.977756 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-log\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.977776 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-run\") pod \"ovn-controller-ovs-zz5h4\" (UID: 
\"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.977894 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-lib\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.977918 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-etc-ovs\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:22 crc kubenswrapper[4856]: I1122 07:22:22.988204 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hwrb9" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.079843 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-lib\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.079918 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-etc-ovs\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.079963 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg5jb\" (UniqueName: \"kubernetes.io/projected/285d77d1-e278-4664-97f0-7562e2740a0b-kube-api-access-bg5jb\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.080046 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285d77d1-e278-4664-97f0-7562e2740a0b-scripts\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.080077 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-log\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.080100 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-run\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.080233 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-lib\") pod \"ovn-controller-ovs-zz5h4\" (UID: 
\"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.080299 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-etc-ovs\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.080331 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-run\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.080444 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-log\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.081945 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285d77d1-e278-4664-97f0-7562e2740a0b-scripts\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.101836 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg5jb\" (UniqueName: \"kubernetes.io/projected/285d77d1-e278-4664-97f0-7562e2740a0b-kube-api-access-bg5jb\") pod \"ovn-controller-ovs-zz5h4\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.162809 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:22:23 crc kubenswrapper[4856]: W1122 07:22:23.682363 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d3a5d31_7183_4298_87ea_4aa84aa395b4.slice/crio-55b8b6cc11950f796e4ca15a70ab3b5a09ce69182c5025eae1e348eb376cbc08 WatchSource:0}: Error finding container 55b8b6cc11950f796e4ca15a70ab3b5a09ce69182c5025eae1e348eb376cbc08: Status 404 returned error can't find the container with id 55b8b6cc11950f796e4ca15a70ab3b5a09ce69182c5025eae1e348eb376cbc08 Nov 22 07:22:23 crc kubenswrapper[4856]: I1122 07:22:23.772492 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d3a5d31-7183-4298-87ea-4aa84aa395b4","Type":"ContainerStarted","Data":"55b8b6cc11950f796e4ca15a70ab3b5a09ce69182c5025eae1e348eb376cbc08"} Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.539177 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.548770 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.552707 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.554423 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-h4bf5" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.554499 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.554725 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.554985 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.555157 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707375 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-config\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707453 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707545 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707568 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707596 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707611 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ts6n\" (UniqueName: \"kubernetes.io/projected/55308d58-6be6-483d-bc27-2904f15d32f0-kube-api-access-9ts6n\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707680 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.707714 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.808731 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.809598 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.809643 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.809815 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-config\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.809900 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.810094 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.810148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.810195 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 
07:22:24.810227 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ts6n\" (UniqueName: \"kubernetes.io/projected/55308d58-6be6-483d-bc27-2904f15d32f0-kube-api-access-9ts6n\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.811166 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-config\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.813373 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.813704 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.817791 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.818706 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.832224 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ts6n\" (UniqueName: \"kubernetes.io/projected/55308d58-6be6-483d-bc27-2904f15d32f0-kube-api-access-9ts6n\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.832622 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.848084 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:24 crc kubenswrapper[4856]: I1122 07:22:24.879013 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:22:25 crc kubenswrapper[4856]: I1122 07:22:25.589725 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:22:25 crc kubenswrapper[4856]: I1122 07:22:25.690989 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zz5h4"] Nov 22 07:22:25 crc kubenswrapper[4856]: I1122 07:22:25.986925 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:22:25 crc kubenswrapper[4856]: I1122 07:22:25.988832 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.012597 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.012935 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.013068 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.013934 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mkt6x" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.014866 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.139675 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.139746 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.139789 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-config\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.139832 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.139891 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.139919 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.140179 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwf99\" (UniqueName: \"kubernetes.io/projected/0768fe63-c6c8-48c2-a121-7216823f73ef-kube-api-access-bwf99\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.140247 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.181966 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-8zttm"] Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.183284 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.195856 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8zttm"] Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.196068 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.241876 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.241929 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.241953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-config\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.241991 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.242018 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " 
pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.242035 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.242095 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwf99\" (UniqueName: \"kubernetes.io/projected/0768fe63-c6c8-48c2-a121-7216823f73ef-kube-api-access-bwf99\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.242118 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.242389 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.243154 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-config\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.243461 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.243858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.250274 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.250961 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.266878 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.267117 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwf99\" (UniqueName: \"kubernetes.io/projected/0768fe63-c6c8-48c2-a121-7216823f73ef-kube-api-access-bwf99\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.268594 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.320083 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.343863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-combined-ca-bundle\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.343921 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlttb\" (UniqueName: \"kubernetes.io/projected/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-kube-api-access-hlttb\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.343953 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovs-rundir\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.343980 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.344021 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-config\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.344063 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovn-rundir\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc 
kubenswrapper[4856]: I1122 07:22:26.446233 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-config\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.446315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovn-rundir\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.446366 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-combined-ca-bundle\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.446400 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlttb\" (UniqueName: \"kubernetes.io/projected/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-kube-api-access-hlttb\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.446442 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovs-rundir\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.446473 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.446671 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovn-rundir\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.447082 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovs-rundir\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.447179 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-config\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.450232 4856 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.450845 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-combined-ca-bundle\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.465277 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlttb\" (UniqueName: \"kubernetes.io/projected/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-kube-api-access-hlttb\") pod \"ovn-controller-metrics-8zttm\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:26 crc kubenswrapper[4856]: I1122 07:22:26.524100 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:22:29 crc kubenswrapper[4856]: I1122 07:22:29.754888 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:22:29 crc kubenswrapper[4856]: I1122 07:22:29.755218 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:22:32 crc kubenswrapper[4856]: W1122 07:22:32.425435 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod285d77d1_e278_4664_97f0_7562e2740a0b.slice/crio-8348d4e10904380f8f331e39a468968f43a9942115652a22a69a7414ef1393da WatchSource:0}: Error finding container 8348d4e10904380f8f331e39a468968f43a9942115652a22a69a7414ef1393da: Status 404 returned error can't find the container with id 8348d4e10904380f8f331e39a468968f43a9942115652a22a69a7414ef1393da Nov 22 07:22:32 crc kubenswrapper[4856]: I1122 07:22:32.848953 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"898257c3-b9a4-4d7b-8484-f3466c19e051","Type":"ContainerStarted","Data":"2e2842c993f54993b4fc7cb6e515a6eddf52c462e2af859c384c462dbece99fe"} Nov 22 07:22:32 crc kubenswrapper[4856]: I1122 07:22:32.850105 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zz5h4" event={"ID":"285d77d1-e278-4664-97f0-7562e2740a0b","Type":"ContainerStarted","Data":"8348d4e10904380f8f331e39a468968f43a9942115652a22a69a7414ef1393da"} Nov 22 07:22:36 crc kubenswrapper[4856]: E1122 07:22:36.011866 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce" Nov 22 07:22:36 crc kubenswrapper[4856]: E1122 07:22:36.012863 4856 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wcqg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(b27ecbc9-0058-49d3-8715-826a4a1bb544): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:22:36 crc kubenswrapper[4856]: E1122 07:22:36.014128 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" Nov 22 07:22:36 crc kubenswrapper[4856]: E1122 07:22:36.880868 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce\\\"\"" pod="openstack/openstack-galera-0" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" Nov 22 07:22:47 crc kubenswrapper[4856]: E1122 07:22:47.919531 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:22:47 crc kubenswrapper[4856]: E1122 07:22:47.920251 4856 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vh5r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6486446b9f-44ln7_openstack(3fa7fa19-a5f6-44ca-baa2-950db382636e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:22:47 crc kubenswrapper[4856]: E1122 07:22:47.921496 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" podUID="3fa7fa19-a5f6-44ca-baa2-950db382636e" Nov 22 07:22:48 crc kubenswrapper[4856]: E1122 07:22:48.179664 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce" Nov 22 07:22:48 crc kubenswrapper[4856]: E1122 07:22:48.179960 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" podUID="3fa7fa19-a5f6-44ca-baa2-950db382636e" Nov 22 07:22:48 crc 
kubenswrapper[4856]: E1122 07:22:48.180108 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kx7nj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(1d3a5d31-7183-4298-87ea-4aa84aa395b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:22:48 crc kubenswrapper[4856]: E1122 07:22:48.181237 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" Nov 22 07:22:48 crc kubenswrapper[4856]: E1122 07:22:48.421294 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:22:48 crc kubenswrapper[4856]: E1122 07:22:48.421546 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces 
--listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lckt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6d8746976c-57w79_openstack(a2717bae-6059-463a-a2e6-eec30a5b57f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:22:48 crc kubenswrapper[4856]: E1122 07:22:48.422754 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6d8746976c-57w79" podUID="a2717bae-6059-463a-a2e6-eec30a5b57f4" Nov 22 07:22:48 crc kubenswrapper[4856]: I1122 07:22:48.597754 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hwrb9"] Nov 22 07:22:48 crc kubenswrapper[4856]: I1122 07:22:48.705520 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.081772 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:10452e2144368e2f128c8fb8ef9e54880b06ef1d71d9f084a0217dcb099c51ce\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.081934 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba\\\"\"" pod="openstack/dnsmasq-dns-6d8746976c-57w79" 
podUID="a2717bae-6059-463a-a2e6-eec30a5b57f4" Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.458603 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.459843 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25pkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7bdd77c89-278g7_openstack(db623bd8-4495-47cf-acc1-abaaa8b754d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.461340 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7bdd77c89-278g7" podUID="db623bd8-4495-47cf-acc1-abaaa8b754d7" Nov 22 07:22:49 crc kubenswrapper[4856]: I1122 07:22:49.633965 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:22:49 crc kubenswrapper[4856]: W1122 07:22:49.650802 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55308d58_6be6_483d_bc27_2904f15d32f0.slice/crio-c5d83babff0062e1a2e5abe3ec909b187ab8e44bbfd6eab77bdac52642c62e4b WatchSource:0}: Error finding container 
c5d83babff0062e1a2e5abe3ec909b187ab8e44bbfd6eab77bdac52642c62e4b: Status 404 returned error can't find the container with id c5d83babff0062e1a2e5abe3ec909b187ab8e44bbfd6eab77bdac52642c62e4b Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.786276 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba" Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.786471 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:18f8463fe46fe6081d5682009e92bbcb3df33282b83b0a2857abaece795cf1ba,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kspj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6584b49599-ktn5n_openstack(d95245ab-74a0-4bfc-bbe1-6522d32e06dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:22:49 crc kubenswrapper[4856]: E1122 07:22:49.788207 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6584b49599-ktn5n" podUID="d95245ab-74a0-4bfc-bbe1-6522d32e06dd" Nov 22 07:22:49 crc kubenswrapper[4856]: I1122 07:22:49.889996 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-8zttm"] Nov 22 07:22:49 crc kubenswrapper[4856]: I1122 
07:22:49.984773 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55308d58-6be6-483d-bc27-2904f15d32f0","Type":"ContainerStarted","Data":"c5d83babff0062e1a2e5abe3ec909b187ab8e44bbfd6eab77bdac52642c62e4b"} Nov 22 07:22:49 crc kubenswrapper[4856]: I1122 07:22:49.986917 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8zttm" event={"ID":"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1","Type":"ContainerStarted","Data":"99b11eeba96df72f922be4388c2d874e1c34e44149ab656894a3c81bacdeab57"} Nov 22 07:22:49 crc kubenswrapper[4856]: I1122 07:22:49.988789 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9" event={"ID":"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3","Type":"ContainerStarted","Data":"efd643fc4f2fed96eafc1048f314e88d90b5b3ffb076b18bdd30c237ebb01b33"} Nov 22 07:22:49 crc kubenswrapper[4856]: I1122 07:22:49.989902 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"63e9edb8-ed05-4d0f-aff1-d59b369cd76d","Type":"ContainerStarted","Data":"302f5810ebcb14d86323414de2c3e642b10138700566cdfa4601f3ae41122fba"} Nov 22 07:22:49 crc kubenswrapper[4856]: I1122 07:22:49.993245 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.651618 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.661125 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.777203 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db623bd8-4495-47cf-acc1-abaaa8b754d7-config\") pod \"db623bd8-4495-47cf-acc1-abaaa8b754d7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.777267 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25pkd\" (UniqueName: \"kubernetes.io/projected/db623bd8-4495-47cf-acc1-abaaa8b754d7-kube-api-access-25pkd\") pod \"db623bd8-4495-47cf-acc1-abaaa8b754d7\" (UID: \"db623bd8-4495-47cf-acc1-abaaa8b754d7\") " Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.777298 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-config\") pod \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.777321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kspj9\" (UniqueName: \"kubernetes.io/projected/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-kube-api-access-kspj9\") pod \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.777426 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-dns-svc\") pod \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\" (UID: \"d95245ab-74a0-4bfc-bbe1-6522d32e06dd\") " Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.778349 4856 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d95245ab-74a0-4bfc-bbe1-6522d32e06dd" (UID: "d95245ab-74a0-4bfc-bbe1-6522d32e06dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.778845 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-config" (OuterVolumeSpecName: "config") pod "d95245ab-74a0-4bfc-bbe1-6522d32e06dd" (UID: "d95245ab-74a0-4bfc-bbe1-6522d32e06dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.779239 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db623bd8-4495-47cf-acc1-abaaa8b754d7-config" (OuterVolumeSpecName: "config") pod "db623bd8-4495-47cf-acc1-abaaa8b754d7" (UID: "db623bd8-4495-47cf-acc1-abaaa8b754d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.783433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-kube-api-access-kspj9" (OuterVolumeSpecName: "kube-api-access-kspj9") pod "d95245ab-74a0-4bfc-bbe1-6522d32e06dd" (UID: "d95245ab-74a0-4bfc-bbe1-6522d32e06dd"). InnerVolumeSpecName "kube-api-access-kspj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.783485 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db623bd8-4495-47cf-acc1-abaaa8b754d7-kube-api-access-25pkd" (OuterVolumeSpecName: "kube-api-access-25pkd") pod "db623bd8-4495-47cf-acc1-abaaa8b754d7" (UID: "db623bd8-4495-47cf-acc1-abaaa8b754d7"). InnerVolumeSpecName "kube-api-access-25pkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.879745 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db623bd8-4495-47cf-acc1-abaaa8b754d7-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.879790 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25pkd\" (UniqueName: \"kubernetes.io/projected/db623bd8-4495-47cf-acc1-abaaa8b754d7-kube-api-access-25pkd\") on node \"crc\" DevicePath \"\"" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.879805 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.879818 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kspj9\" (UniqueName: \"kubernetes.io/projected/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-kube-api-access-kspj9\") on node \"crc\" DevicePath \"\"" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.879830 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d95245ab-74a0-4bfc-bbe1-6522d32e06dd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:22:50 crc kubenswrapper[4856]: I1122 07:22:50.998671 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0768fe63-c6c8-48c2-a121-7216823f73ef","Type":"ContainerStarted","Data":"586c46078d618d2d93d14100623bfa053417297c9f2511cf5dae809eb647e663"} Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.000158 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6584b49599-ktn5n" event={"ID":"d95245ab-74a0-4bfc-bbe1-6522d32e06dd","Type":"ContainerDied","Data":"ffd9376b677c9f2cf8e4200ddc71b938c5f2ed87d64b6ec6df7c06429b9ea9bd"} Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.000186 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6584b49599-ktn5n" Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.001362 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7bdd77c89-278g7" event={"ID":"db623bd8-4495-47cf-acc1-abaaa8b754d7","Type":"ContainerDied","Data":"a99c582c7865ac9ab4a6e80a2c36980d536641ee39baa2ed77e84b85c6ee1a9e"} Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.001348 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7bdd77c89-278g7" Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.067845 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-ktn5n"] Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.082664 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6584b49599-ktn5n"] Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.095676 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-278g7"] Nov 22 07:22:51 crc kubenswrapper[4856]: I1122 07:22:51.101189 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7bdd77c89-278g7"] Nov 22 07:22:52 crc kubenswrapper[4856]: I1122 07:22:52.020341 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89","Type":"ContainerStarted","Data":"c10cec0c537e858b480226f22e5be592da7a6e4e6ce33e779e0e631dde2f8987"} Nov 22 07:22:52 crc kubenswrapper[4856]: I1122 07:22:52.023693 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429","Type":"ContainerStarted","Data":"f5173b778bc6df84dd44ccb0081f7b0478ee848a30a82116594357ab8bd607c4"} Nov 22 07:22:52 crc kubenswrapper[4856]: I1122 07:22:52.725235 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d95245ab-74a0-4bfc-bbe1-6522d32e06dd" path="/var/lib/kubelet/pods/d95245ab-74a0-4bfc-bbe1-6522d32e06dd/volumes" Nov 22 07:22:52 crc kubenswrapper[4856]: I1122 07:22:52.726011 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db623bd8-4495-47cf-acc1-abaaa8b754d7" path="/var/lib/kubelet/pods/db623bd8-4495-47cf-acc1-abaaa8b754d7/volumes" Nov 22 07:22:53 crc kubenswrapper[4856]: E1122 07:22:53.264365 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Nov 22 07:22:53 crc kubenswrapper[4856]: E1122 07:22:53.264735 4856 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Nov 22 07:22:53 crc kubenswrapper[4856]: E1122 07:22:53.264911 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-chm6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(898257c3-b9a4-4d7b-8484-f3466c19e051): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Nov 22 07:22:53 crc kubenswrapper[4856]: E1122 07:22:53.266225 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="898257c3-b9a4-4d7b-8484-f3466c19e051" Nov 22 07:22:54 crc kubenswrapper[4856]: E1122 07:22:54.057181 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="898257c3-b9a4-4d7b-8484-f3466c19e051" Nov 22 07:22:59 crc kubenswrapper[4856]: I1122 07:22:59.754608 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:22:59 crc kubenswrapper[4856]: I1122 07:22:59.755215 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:23:00 crc kubenswrapper[4856]: I1122 07:23:00.089227 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27ecbc9-0058-49d3-8715-826a4a1bb544","Type":"ContainerStarted","Data":"6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd"} Nov 22 07:23:00 crc kubenswrapper[4856]: I1122 07:23:00.090765 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9" 
event={"ID":"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3","Type":"ContainerStarted","Data":"9dcec325019ebdfce923c32261c3801484f6c45ab535eb4623bd34243cd70533"} Nov 22 07:23:00 crc kubenswrapper[4856]: I1122 07:23:00.092237 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"63e9edb8-ed05-4d0f-aff1-d59b369cd76d","Type":"ContainerStarted","Data":"63572ca1ab3b819180a4d2cdb47a2c1f194a6daee761f767b694471277028ac6"} Nov 22 07:23:00 crc kubenswrapper[4856]: I1122 07:23:00.093538 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0768fe63-c6c8-48c2-a121-7216823f73ef","Type":"ContainerStarted","Data":"4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a"} Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.103065 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0768fe63-c6c8-48c2-a121-7216823f73ef","Type":"ContainerStarted","Data":"a27e20589e4e9738c8b1ba2a88ec92db294be52ec1405bb5a02a6d451b8e8534"} Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.105669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55308d58-6be6-483d-bc27-2904f15d32f0","Type":"ContainerStarted","Data":"beff5c4f9865829069fb5a650f73d4daaf877eaaaf7cd411dbc96c82233e8e19"} Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.105704 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55308d58-6be6-483d-bc27-2904f15d32f0","Type":"ContainerStarted","Data":"52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c"} Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.107489 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8zttm" event={"ID":"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1","Type":"ContainerStarted","Data":"508f07d95f18906c3efe0a28a1a716873bf2a5fa811acd5075db09b60b6b55fb"} Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.111617 4856 generic.go:334] "Generic (PLEG): container finished" podID="285d77d1-e278-4664-97f0-7562e2740a0b" containerID="358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838" exitCode=0 Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.111741 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zz5h4" event={"ID":"285d77d1-e278-4664-97f0-7562e2740a0b","Type":"ContainerDied","Data":"358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838"} Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.111870 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.133470 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-8zttm" podStartSLOduration=26.547820517 podStartE2EDuration="35.133446565s" podCreationTimestamp="2025-11-22 07:22:26 +0000 UTC" firstStartedPulling="2025-11-22 07:22:49.892106359 +0000 UTC m=+1212.305499607" lastFinishedPulling="2025-11-22 07:22:58.477732397 +0000 UTC m=+1220.891125655" observedRunningTime="2025-11-22 07:23:01.126490163 +0000 UTC m=+1223.539883421" watchObservedRunningTime="2025-11-22 07:23:01.133446565 +0000 UTC m=+1223.546839833" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.189140 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hwrb9" podStartSLOduration=30.008837886 
podStartE2EDuration="39.189091471s" podCreationTimestamp="2025-11-22 07:22:22 +0000 UTC" firstStartedPulling="2025-11-22 07:22:49.296012771 +0000 UTC m=+1211.709406029" lastFinishedPulling="2025-11-22 07:22:58.476266356 +0000 UTC m=+1220.889659614" observedRunningTime="2025-11-22 07:23:01.187074436 +0000 UTC m=+1223.600467704" watchObservedRunningTime="2025-11-22 07:23:01.189091471 +0000 UTC m=+1223.602484729" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.211661 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=35.045930292 podStartE2EDuration="44.211639595s" podCreationTimestamp="2025-11-22 07:22:17 +0000 UTC" firstStartedPulling="2025-11-22 07:22:49.298322215 +0000 UTC m=+1211.711715473" lastFinishedPulling="2025-11-22 07:22:58.464031518 +0000 UTC m=+1220.877424776" observedRunningTime="2025-11-22 07:23:01.20567362 +0000 UTC m=+1223.619066878" watchObservedRunningTime="2025-11-22 07:23:01.211639595 +0000 UTC m=+1223.625032853" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.419879 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-44ln7"] Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.462054 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-9mx4q"] Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.467132 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.469741 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.487264 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-9mx4q"] Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.552775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.552859 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.552963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-config\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.553034 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxfth\" (UniqueName: \"kubernetes.io/projected/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-kube-api-access-zxfth\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.653945 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.654011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.654050 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-config\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.654095 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxfth\" (UniqueName: \"kubernetes.io/projected/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-kube-api-access-zxfth\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.655178 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-dns-svc\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.655695 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-ovsdbserver-nb\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.656239 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-config\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.677965 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxfth\" (UniqueName: \"kubernetes.io/projected/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-kube-api-access-zxfth\") pod \"dnsmasq-dns-6c65c5f57f-9mx4q\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.727130 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-57w79"] Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.749587 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-rnsxb"] Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.751248 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.755573 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.756355 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.771200 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-rnsxb"] Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.803623 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.856632 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-dns-svc\") pod \"3fa7fa19-a5f6-44ca-baa2-950db382636e\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.856970 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vh5r\" (UniqueName: \"kubernetes.io/projected/3fa7fa19-a5f6-44ca-baa2-950db382636e-kube-api-access-2vh5r\") pod \"3fa7fa19-a5f6-44ca-baa2-950db382636e\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.857088 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-config\") pod \"3fa7fa19-a5f6-44ca-baa2-950db382636e\" (UID: \"3fa7fa19-a5f6-44ca-baa2-950db382636e\") " Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.857216 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3fa7fa19-a5f6-44ca-baa2-950db382636e" (UID: "3fa7fa19-a5f6-44ca-baa2-950db382636e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.857622 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.857672 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.857773 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cb9t\" (UniqueName: \"kubernetes.io/projected/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-kube-api-access-5cb9t\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.857850 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.857958 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-config\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.858007 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.858259 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-config" (OuterVolumeSpecName: "config") pod "3fa7fa19-a5f6-44ca-baa2-950db382636e" (UID: "3fa7fa19-a5f6-44ca-baa2-950db382636e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.862790 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa7fa19-a5f6-44ca-baa2-950db382636e-kube-api-access-2vh5r" (OuterVolumeSpecName: "kube-api-access-2vh5r") pod "3fa7fa19-a5f6-44ca-baa2-950db382636e" (UID: "3fa7fa19-a5f6-44ca-baa2-950db382636e"). InnerVolumeSpecName "kube-api-access-2vh5r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.958870 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cb9t\" (UniqueName: \"kubernetes.io/projected/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-kube-api-access-5cb9t\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.958934 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.958972 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-config\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.959006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.959037 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.959115 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa7fa19-a5f6-44ca-baa2-950db382636e-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.959131 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vh5r\" (UniqueName: \"kubernetes.io/projected/3fa7fa19-a5f6-44ca-baa2-950db382636e-kube-api-access-2vh5r\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.960044 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-config\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.960055 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.960072 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-dns-svc\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: 
\"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.960195 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:01 crc kubenswrapper[4856]: I1122 07:23:01.976110 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cb9t\" (UniqueName: \"kubernetes.io/projected/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-kube-api-access-5cb9t\") pod \"dnsmasq-dns-5c476d78c5-rnsxb\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.073029 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.128969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zz5h4" event={"ID":"285d77d1-e278-4664-97f0-7562e2740a0b","Type":"ContainerStarted","Data":"f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470"} Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.133851 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" event={"ID":"3fa7fa19-a5f6-44ca-baa2-950db382636e","Type":"ContainerDied","Data":"516c1de75f1977cfad93b4ff59c0af37cbe3096646786ae1361d347291754645"} Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.133943 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6486446b9f-44ln7" Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.168923 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=30.343087377 podStartE2EDuration="39.168899s" podCreationTimestamp="2025-11-22 07:22:23 +0000 UTC" firstStartedPulling="2025-11-22 07:22:49.654902466 +0000 UTC m=+1212.068295724" lastFinishedPulling="2025-11-22 07:22:58.480714089 +0000 UTC m=+1220.894107347" observedRunningTime="2025-11-22 07:23:02.157038053 +0000 UTC m=+1224.570431331" watchObservedRunningTime="2025-11-22 07:23:02.168899 +0000 UTC m=+1224.582292258" Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.209454 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-44ln7"] Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.215911 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6486446b9f-44ln7"] Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.295799 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=29.747728397 podStartE2EDuration="38.295776306s" podCreationTimestamp="2025-11-22 07:22:24 +0000 UTC" firstStartedPulling="2025-11-22 07:22:50.012643509 +0000 UTC m=+1212.426036767" lastFinishedPulling="2025-11-22 07:22:58.560691418 +0000 UTC m=+1220.974084676" observedRunningTime="2025-11-22 07:23:02.249386424 +0000 UTC m=+1224.662779702" watchObservedRunningTime="2025-11-22 07:23:02.295776306 +0000 UTC m=+1224.709169564" Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.297371 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-9mx4q"] Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.324588 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.369956 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.572449 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-rnsxb"] Nov 22 07:23:02 crc kubenswrapper[4856]: W1122 07:23:02.575429 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda603c362_75dd_4b1a_a7ba_b49a5da55cf0.slice/crio-90922a66bda0f01a5041d62206e45d42239841065e0289617165d744d25d68fb WatchSource:0}: Error finding container 90922a66bda0f01a5041d62206e45d42239841065e0289617165d744d25d68fb: Status 404 returned error can't find the container with id 90922a66bda0f01a5041d62206e45d42239841065e0289617165d744d25d68fb Nov 22 07:23:02 crc kubenswrapper[4856]: I1122 07:23:02.732737 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa7fa19-a5f6-44ca-baa2-950db382636e" path="/var/lib/kubelet/pods/3fa7fa19-a5f6-44ca-baa2-950db382636e/volumes" Nov 22 07:23:03 crc kubenswrapper[4856]: I1122 07:23:03.357044 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hwrb9" Nov 22 07:23:03 crc kubenswrapper[4856]: I1122 07:23:03.378946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" 
event={"ID":"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814","Type":"ContainerStarted","Data":"23a76cc851d6cb50d07a7e090e9c3eb53609be00eb97238f1ed799957e227183"} Nov 22 07:23:03 crc kubenswrapper[4856]: I1122 07:23:03.380289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" event={"ID":"a603c362-75dd-4b1a-a7ba-b49a5da55cf0","Type":"ContainerStarted","Data":"90922a66bda0f01a5041d62206e45d42239841065e0289617165d744d25d68fb"} Nov 22 07:23:03 crc kubenswrapper[4856]: I1122 07:23:03.380798 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 22 07:23:03 crc kubenswrapper[4856]: I1122 07:23:03.879699 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 22 07:23:03 crc kubenswrapper[4856]: I1122 07:23:03.920872 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 22 07:23:04 crc kubenswrapper[4856]: I1122 07:23:04.389411 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d3a5d31-7183-4298-87ea-4aa84aa395b4","Type":"ContainerStarted","Data":"6dbc2c42beeb03f5f93f9ca2890f1f6f74875cdba0da041cffe6c07e36ced3cf"} Nov 22 07:23:04 crc kubenswrapper[4856]: I1122 07:23:04.389782 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 22 07:23:04 crc kubenswrapper[4856]: I1122 07:23:04.432262 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.397234 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zz5h4" event={"ID":"285d77d1-e278-4664-97f0-7562e2740a0b","Type":"ContainerStarted","Data":"1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964"} Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.440528 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.611724 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.613413 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.615480 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.618638 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.618645 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.618935 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-xd5tj" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.623138 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.696950 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.697262 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxgrp\" (UniqueName: \"kubernetes.io/projected/3aa24715-1df9-4a47-9817-4a1b68679d08-kube-api-access-zxgrp\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.697294 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.697495 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-scripts\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.697569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.697601 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.697656 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-config\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: 
I1122 07:23:05.799454 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.799582 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-scripts\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.799626 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.799653 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.799685 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-config\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.799741 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.799789 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxgrp\" (UniqueName: \"kubernetes.io/projected/3aa24715-1df9-4a47-9817-4a1b68679d08-kube-api-access-zxgrp\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.800201 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.801036 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-scripts\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.801058 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-config\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.806809 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.807201 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.812196 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.819386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxgrp\" (UniqueName: \"kubernetes.io/projected/3aa24715-1df9-4a47-9817-4a1b68679d08-kube-api-access-zxgrp\") pod \"ovn-northd-0\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " pod="openstack/ovn-northd-0" Nov 22 07:23:05 crc kubenswrapper[4856]: I1122 07:23:05.943792 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:23:07 crc kubenswrapper[4856]: I1122 07:23:07.402348 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 22 07:23:07 crc kubenswrapper[4856]: I1122 07:23:07.411064 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:23:07 crc kubenswrapper[4856]: I1122 07:23:07.411109 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:23:07 crc kubenswrapper[4856]: I1122 07:23:07.480400 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zz5h4" podStartSLOduration=19.452868017 podStartE2EDuration="45.48038217s" podCreationTimestamp="2025-11-22 07:22:22 +0000 UTC" firstStartedPulling="2025-11-22 07:22:32.430136059 +0000 UTC m=+1194.843529317" lastFinishedPulling="2025-11-22 07:22:58.457650212 +0000 UTC m=+1220.871043470" observedRunningTime="2025-11-22 07:23:07.455099764 +0000 UTC m=+1229.868493022" watchObservedRunningTime="2025-11-22 07:23:07.48038217 +0000 UTC m=+1229.893775428" Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.856019 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-rnsxb"] Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.888460 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-9dc6m"] Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.890189 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.897251 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-9dc6m"] Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.954993 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.955050 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-config\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.955108 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.955147 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:08 crc kubenswrapper[4856]: I1122 07:23:08.955172 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwg7b\" (UniqueName: \"kubernetes.io/projected/ec618b5f-bf54-4636-b50b-330cdfdfcd62-kube-api-access-lwg7b\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.057122 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.057196 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwg7b\" (UniqueName: \"kubernetes.io/projected/ec618b5f-bf54-4636-b50b-330cdfdfcd62-kube-api-access-lwg7b\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.057297 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.057331 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-config\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.057385 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.058377 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-dns-svc\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.059191 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.059281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-config\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.059421 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.081421 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwg7b\" (UniqueName: \"kubernetes.io/projected/ec618b5f-bf54-4636-b50b-330cdfdfcd62-kube-api-access-lwg7b\") pod \"dnsmasq-dns-5c9fdb784c-9dc6m\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:09 crc kubenswrapper[4856]: I1122 07:23:09.209911 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.045818 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.051166 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.052846 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.053557 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-qcchl" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.053674 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.053766 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.083207 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.174500 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-cache\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.174597 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.174631 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-lock\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.174659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.174684 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq8fr\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-kube-api-access-rq8fr\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.275825 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.275875 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-lock\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.275904 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.275930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq8fr\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-kube-api-access-rq8fr\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.276011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-cache\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: E1122 07:23:10.276186 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:23:10 crc kubenswrapper[4856]: E1122 07:23:10.276221 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:23:10 crc kubenswrapper[4856]: E1122 07:23:10.276300 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift podName:8b649794-30ba-493c-9285-05a58981ed36 nodeName:}" failed. No retries permitted until 2025-11-22 07:23:10.776277271 +0000 UTC m=+1233.189670529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift") pod "swift-storage-0" (UID: "8b649794-30ba-493c-9285-05a58981ed36") : configmap "swift-ring-files" not found Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.276425 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.276524 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-cache\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.276812 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-lock\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.301117 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.306090 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq8fr\" (UniqueName: 
\"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-kube-api-access-rq8fr\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.571861 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-5mjrn"] Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.573329 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.575768 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.576760 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.576784 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.597593 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5mjrn"] Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.688197 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xthlc\" (UniqueName: \"kubernetes.io/projected/1c34ba2b-b0cb-4527-b651-a888c0b49d32-kube-api-access-xthlc\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.688253 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-combined-ca-bundle\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.688293 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-scripts\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.688313 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-ring-data-devices\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.688328 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1c34ba2b-b0cb-4527-b651-a888c0b49d32-etc-swift\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.688356 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-swiftconf\") pod \"swift-ring-rebalance-5mjrn\" (UID: 
\"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.688395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-dispersionconf\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790071 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xthlc\" (UniqueName: \"kubernetes.io/projected/1c34ba2b-b0cb-4527-b651-a888c0b49d32-kube-api-access-xthlc\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790140 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790168 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-combined-ca-bundle\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790211 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-scripts\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790241 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-ring-data-devices\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790267 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1c34ba2b-b0cb-4527-b651-a888c0b49d32-etc-swift\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790308 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-swiftconf\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.790365 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-dispersionconf\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc 
kubenswrapper[4856]: E1122 07:23:10.791460 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:23:10 crc kubenswrapper[4856]: E1122 07:23:10.791543 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:23:10 crc kubenswrapper[4856]: E1122 07:23:10.791615 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift podName:8b649794-30ba-493c-9285-05a58981ed36 nodeName:}" failed. No retries permitted until 2025-11-22 07:23:11.791587699 +0000 UTC m=+1234.204980997 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift") pod "swift-storage-0" (UID: "8b649794-30ba-493c-9285-05a58981ed36") : configmap "swift-ring-files" not found Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.791772 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1c34ba2b-b0cb-4527-b651-a888c0b49d32-etc-swift\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.792017 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-scripts\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.792223 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-ring-data-devices\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.794729 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-dispersionconf\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.794969 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-combined-ca-bundle\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.800195 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-swiftconf\") pod \"swift-ring-rebalance-5mjrn\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.811942 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xthlc\" (UniqueName: \"kubernetes.io/projected/1c34ba2b-b0cb-4527-b651-a888c0b49d32-kube-api-access-xthlc\") pod \"swift-ring-rebalance-5mjrn\" (UID: 
\"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:10 crc kubenswrapper[4856]: I1122 07:23:10.904797 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:11 crc kubenswrapper[4856]: I1122 07:23:11.817034 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:11 crc kubenswrapper[4856]: E1122 07:23:11.817193 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:23:11 crc kubenswrapper[4856]: E1122 07:23:11.817221 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:23:11 crc kubenswrapper[4856]: E1122 07:23:11.817278 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift podName:8b649794-30ba-493c-9285-05a58981ed36 nodeName:}" failed. No retries permitted until 2025-11-22 07:23:13.817257543 +0000 UTC m=+1236.230650801 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift") pod "swift-storage-0" (UID: "8b649794-30ba-493c-9285-05a58981ed36") : configmap "swift-ring-files" not found Nov 22 07:23:13 crc kubenswrapper[4856]: I1122 07:23:13.849203 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:13 crc kubenswrapper[4856]: E1122 07:23:13.849381 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:23:13 crc kubenswrapper[4856]: E1122 07:23:13.849601 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:23:13 crc kubenswrapper[4856]: E1122 07:23:13.849654 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift podName:8b649794-30ba-493c-9285-05a58981ed36 nodeName:}" failed. No retries permitted until 2025-11-22 07:23:17.849637188 +0000 UTC m=+1240.263030446 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift") pod "swift-storage-0" (UID: "8b649794-30ba-493c-9285-05a58981ed36") : configmap "swift-ring-files" not found Nov 22 07:23:16 crc kubenswrapper[4856]: I1122 07:23:16.810552 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-9dc6m"] Nov 22 07:23:16 crc kubenswrapper[4856]: I1122 07:23:16.819745 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-5mjrn"] Nov 22 07:23:16 crc kubenswrapper[4856]: I1122 07:23:16.938110 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:23:16 crc kubenswrapper[4856]: W1122 07:23:16.945607 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3aa24715_1df9_4a47_9817_4a1b68679d08.slice/crio-42d5b029ad6e5e568979cf1befdc02d83feaca02fd64ca4d444389e8422eafc3 WatchSource:0}: Error finding container 42d5b029ad6e5e568979cf1befdc02d83feaca02fd64ca4d444389e8422eafc3: Status 404 returned error can't find the container with id 42d5b029ad6e5e568979cf1befdc02d83feaca02fd64ca4d444389e8422eafc3 Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.512984 4856 generic.go:334] "Generic (PLEG): container finished" podID="a603c362-75dd-4b1a-a7ba-b49a5da55cf0" containerID="befa04b54831b9c35c148c54b8e621913675dd76361199832ffd651b0bcee91a" exitCode=0 Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.513048 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" event={"ID":"a603c362-75dd-4b1a-a7ba-b49a5da55cf0","Type":"ContainerDied","Data":"befa04b54831b9c35c148c54b8e621913675dd76361199832ffd651b0bcee91a"} Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.517037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5mjrn" event={"ID":"1c34ba2b-b0cb-4527-b651-a888c0b49d32","Type":"ContainerStarted","Data":"4ba4252337d08696ab1bd4a604c12d12aa254009918764086b4f8e21b13dd6db"} Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.522878 4856 generic.go:334] "Generic (PLEG): container finished" podID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerID="91ce8e1bc2027dfb2c393d69899ca6977785f098b6227ed8a592fe49deea0b07" exitCode=0 Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.522940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" event={"ID":"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814","Type":"ContainerDied","Data":"91ce8e1bc2027dfb2c393d69899ca6977785f098b6227ed8a592fe49deea0b07"} Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.530795 4856 generic.go:334] "Generic (PLEG): container finished" podID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerID="df9db1c81948a84fcf97203f1e737013cca74f18585909f9e9b3f8bb5907c03a" exitCode=0 Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.531009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" event={"ID":"ec618b5f-bf54-4636-b50b-330cdfdfcd62","Type":"ContainerDied","Data":"df9db1c81948a84fcf97203f1e737013cca74f18585909f9e9b3f8bb5907c03a"} Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.531071 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" 
event={"ID":"ec618b5f-bf54-4636-b50b-330cdfdfcd62","Type":"ContainerStarted","Data":"cec7a6d45622f5ea52482e93c689af078fa72222451f214f547dd9001829bf3d"} Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.532645 4856 generic.go:334] "Generic (PLEG): container finished" podID="a2717bae-6059-463a-a2e6-eec30a5b57f4" containerID="d241e30d1a3694faed58909e112649b7c21d6bca93c8390391a6511428f3737b" exitCode=0 Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.532778 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-57w79" event={"ID":"a2717bae-6059-463a-a2e6-eec30a5b57f4","Type":"ContainerDied","Data":"d241e30d1a3694faed58909e112649b7c21d6bca93c8390391a6511428f3737b"} Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.538997 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3aa24715-1df9-4a47-9817-4a1b68679d08","Type":"ContainerStarted","Data":"42d5b029ad6e5e568979cf1befdc02d83feaca02fd64ca4d444389e8422eafc3"} Nov 22 07:23:17 crc kubenswrapper[4856]: I1122 07:23:17.922010 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:17 crc kubenswrapper[4856]: E1122 07:23:17.922444 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:23:17 crc kubenswrapper[4856]: E1122 07:23:17.922468 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:23:17 crc kubenswrapper[4856]: E1122 07:23:17.922589 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift podName:8b649794-30ba-493c-9285-05a58981ed36 nodeName:}" failed. No retries permitted until 2025-11-22 07:23:25.922575371 +0000 UTC m=+1248.335968629 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift") pod "swift-storage-0" (UID: "8b649794-30ba-493c-9285-05a58981ed36") : configmap "swift-ring-files" not found Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.547267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" event={"ID":"a603c362-75dd-4b1a-a7ba-b49a5da55cf0","Type":"ContainerDied","Data":"90922a66bda0f01a5041d62206e45d42239841065e0289617165d744d25d68fb"} Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.547655 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90922a66bda0f01a5041d62206e45d42239841065e0289617165d744d25d68fb" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.549228 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d8746976c-57w79" event={"ID":"a2717bae-6059-463a-a2e6-eec30a5b57f4","Type":"ContainerDied","Data":"ae866f76d586e5ff151b5d60522c314ed92cb15583d2ffe4f2ba95fdae5e300b"} Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.549252 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae866f76d586e5ff151b5d60522c314ed92cb15583d2ffe4f2ba95fdae5e300b" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.567533 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.573859 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638489 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-config\") pod \"a2717bae-6059-463a-a2e6-eec30a5b57f4\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638575 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lckt\" (UniqueName: \"kubernetes.io/projected/a2717bae-6059-463a-a2e6-eec30a5b57f4-kube-api-access-4lckt\") pod \"a2717bae-6059-463a-a2e6-eec30a5b57f4\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638656 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-config\") pod \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638689 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cb9t\" (UniqueName: \"kubernetes.io/projected/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-kube-api-access-5cb9t\") pod \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638748 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-sb\") pod \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638777 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-dns-svc\") pod \"a2717bae-6059-463a-a2e6-eec30a5b57f4\" (UID: \"a2717bae-6059-463a-a2e6-eec30a5b57f4\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638812 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-dns-svc\") pod \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.638898 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-nb\") pod \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\" (UID: \"a603c362-75dd-4b1a-a7ba-b49a5da55cf0\") " Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.644481 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-kube-api-access-5cb9t" (OuterVolumeSpecName: "kube-api-access-5cb9t") pod "a603c362-75dd-4b1a-a7ba-b49a5da55cf0" (UID: "a603c362-75dd-4b1a-a7ba-b49a5da55cf0"). InnerVolumeSpecName "kube-api-access-5cb9t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.644945 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2717bae-6059-463a-a2e6-eec30a5b57f4-kube-api-access-4lckt" (OuterVolumeSpecName: "kube-api-access-4lckt") pod "a2717bae-6059-463a-a2e6-eec30a5b57f4" (UID: "a2717bae-6059-463a-a2e6-eec30a5b57f4"). InnerVolumeSpecName "kube-api-access-4lckt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.658922 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a603c362-75dd-4b1a-a7ba-b49a5da55cf0" (UID: "a603c362-75dd-4b1a-a7ba-b49a5da55cf0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.664694 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-config" (OuterVolumeSpecName: "config") pod "a2717bae-6059-463a-a2e6-eec30a5b57f4" (UID: "a2717bae-6059-463a-a2e6-eec30a5b57f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.668741 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a2717bae-6059-463a-a2e6-eec30a5b57f4" (UID: "a2717bae-6059-463a-a2e6-eec30a5b57f4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.671155 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-config" (OuterVolumeSpecName: "config") pod "a603c362-75dd-4b1a-a7ba-b49a5da55cf0" (UID: "a603c362-75dd-4b1a-a7ba-b49a5da55cf0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.672923 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a603c362-75dd-4b1a-a7ba-b49a5da55cf0" (UID: "a603c362-75dd-4b1a-a7ba-b49a5da55cf0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.676304 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a603c362-75dd-4b1a-a7ba-b49a5da55cf0" (UID: "a603c362-75dd-4b1a-a7ba-b49a5da55cf0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.740717 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.741384 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.741476 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lckt\" (UniqueName: \"kubernetes.io/projected/a2717bae-6059-463a-a2e6-eec30a5b57f4-kube-api-access-4lckt\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.741623 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.741889 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cb9t\" (UniqueName: \"kubernetes.io/projected/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-kube-api-access-5cb9t\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.741981 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.742164 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a2717bae-6059-463a-a2e6-eec30a5b57f4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:18 crc kubenswrapper[4856]: I1122 07:23:18.742253 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a603c362-75dd-4b1a-a7ba-b49a5da55cf0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:19 crc kubenswrapper[4856]: I1122 07:23:19.563577 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c476d78c5-rnsxb" Nov 22 07:23:19 crc kubenswrapper[4856]: I1122 07:23:19.563663 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d8746976c-57w79" Nov 22 07:23:19 crc kubenswrapper[4856]: I1122 07:23:19.614893 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-57w79"] Nov 22 07:23:19 crc kubenswrapper[4856]: I1122 07:23:19.621166 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d8746976c-57w79"] Nov 22 07:23:19 crc kubenswrapper[4856]: I1122 07:23:19.653314 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-rnsxb"] Nov 22 07:23:19 crc kubenswrapper[4856]: I1122 07:23:19.660531 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c476d78c5-rnsxb"] Nov 22 07:23:20 crc kubenswrapper[4856]: I1122 07:23:20.573963 4856 generic.go:334] "Generic (PLEG): container finished" podID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerID="6dbc2c42beeb03f5f93f9ca2890f1f6f74875cdba0da041cffe6c07e36ced3cf" exitCode=0 Nov 22 07:23:20 crc kubenswrapper[4856]: I1122 07:23:20.574051 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d3a5d31-7183-4298-87ea-4aa84aa395b4","Type":"ContainerDied","Data":"6dbc2c42beeb03f5f93f9ca2890f1f6f74875cdba0da041cffe6c07e36ced3cf"} Nov 22 07:23:20 crc kubenswrapper[4856]: I1122 07:23:20.576795 4856 generic.go:334] "Generic (PLEG): container finished" podID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerID="6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd" exitCode=0 Nov 22 07:23:20 crc kubenswrapper[4856]: I1122 07:23:20.576824 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27ecbc9-0058-49d3-8715-826a4a1bb544","Type":"ContainerDied","Data":"6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd"} Nov 22 07:23:20 crc kubenswrapper[4856]: I1122 07:23:20.722356 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2717bae-6059-463a-a2e6-eec30a5b57f4" path="/var/lib/kubelet/pods/a2717bae-6059-463a-a2e6-eec30a5b57f4/volumes" Nov 22 07:23:20 crc kubenswrapper[4856]: I1122 07:23:20.723172 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a603c362-75dd-4b1a-a7ba-b49a5da55cf0" path="/var/lib/kubelet/pods/a603c362-75dd-4b1a-a7ba-b49a5da55cf0/volumes" Nov 22 07:23:23 crc kubenswrapper[4856]: I1122 07:23:23.605188 4856 generic.go:334] "Generic (PLEG): container finished" podID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerID="c10cec0c537e858b480226f22e5be592da7a6e4e6ce33e779e0e631dde2f8987" exitCode=0 Nov 22 07:23:23 crc kubenswrapper[4856]: I1122 07:23:23.605488 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89","Type":"ContainerDied","Data":"c10cec0c537e858b480226f22e5be592da7a6e4e6ce33e779e0e631dde2f8987"} Nov 22 07:23:23 crc kubenswrapper[4856]: I1122 07:23:23.609385 4856 generic.go:334] "Generic (PLEG): container finished" podID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerID="f5173b778bc6df84dd44ccb0081f7b0478ee848a30a82116594357ab8bd607c4" exitCode=0 Nov 22 07:23:23 crc kubenswrapper[4856]: I1122 07:23:23.609441 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429","Type":"ContainerDied","Data":"f5173b778bc6df84dd44ccb0081f7b0478ee848a30a82116594357ab8bd607c4"} Nov 22 07:23:24 crc kubenswrapper[4856]: I1122 07:23:24.619285 4856 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" event={"ID":"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814","Type":"ContainerStarted","Data":"91befb77543f481e5b3fa54c689a0a07ebc807e71eea598d0dca1e86e747500d"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.633841 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429","Type":"ContainerStarted","Data":"fe053dc6b4b700a119cd588385a844042a2dde38e5a679600fc61619199db0cc"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.634438 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.636656 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d3a5d31-7183-4298-87ea-4aa84aa395b4","Type":"ContainerStarted","Data":"cde1d5e34fed489806a536b0abe875c6d7151093d591a234d52ed41c693e2b63"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.638020 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27ecbc9-0058-49d3-8715-826a4a1bb544","Type":"ContainerStarted","Data":"bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.639302 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"898257c3-b9a4-4d7b-8484-f3466c19e051","Type":"ContainerStarted","Data":"f7971572f0255cf6911c06156edf962b38287cca61b7d41c5cd4c9d5ecd2a048"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.639546 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.640613 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3aa24715-1df9-4a47-9817-4a1b68679d08","Type":"ContainerStarted","Data":"bef68756d75607bcf49b118ee011e2d46c1fca15a0f4988d5490ac2121c7d6ec"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.640640 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3aa24715-1df9-4a47-9817-4a1b68679d08","Type":"ContainerStarted","Data":"0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.640763 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.642256 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89","Type":"ContainerStarted","Data":"1a983d61b8dfe6b5b848b2945b31f7053bd5045dbc03ba4867c1e7855f9b3dcd"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.642410 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.643543 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5mjrn" event={"ID":"1c34ba2b-b0cb-4527-b651-a888c0b49d32","Type":"ContainerStarted","Data":"310c395b35f6d8ce91619f7277306489a7826437eb66bb531fd9e9b73c33c26d"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.645816 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" 
event={"ID":"ec618b5f-bf54-4636-b50b-330cdfdfcd62","Type":"ContainerStarted","Data":"b3b6e71e300a03394fe34c27f1ab9fdba9c2acea13d798b97a51ff2be3f5e36c"} Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.645866 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.645878 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.672615 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.005195277 podStartE2EDuration="1m13.67259558s" podCreationTimestamp="2025-11-22 07:22:12 +0000 UTC" firstStartedPulling="2025-11-22 07:22:14.640500587 +0000 UTC m=+1177.053893845" lastFinishedPulling="2025-11-22 07:22:49.30790089 +0000 UTC m=+1211.721294148" observedRunningTime="2025-11-22 07:23:25.66075048 +0000 UTC m=+1248.074143728" watchObservedRunningTime="2025-11-22 07:23:25.67259558 +0000 UTC m=+1248.085988838" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.701360 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-5mjrn" podStartSLOduration=7.700815783 podStartE2EDuration="15.701344936s" podCreationTimestamp="2025-11-22 07:23:10 +0000 UTC" firstStartedPulling="2025-11-22 07:23:16.850324029 +0000 UTC m=+1239.263717287" lastFinishedPulling="2025-11-22 07:23:24.850853182 +0000 UTC m=+1247.264246440" observedRunningTime="2025-11-22 07:23:25.699856383 +0000 UTC m=+1248.113249641" watchObservedRunningTime="2025-11-22 07:23:25.701344936 +0000 UTC m=+1248.114738194" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.719877 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.036227435 podStartE2EDuration="1m7.719859328s" podCreationTimestamp="2025-11-22 07:22:18 +0000 UTC" firstStartedPulling="2025-11-22 07:22:32.423719531 +0000 UTC m=+1194.837112789" lastFinishedPulling="2025-11-22 07:23:23.107351414 +0000 UTC m=+1245.520744682" observedRunningTime="2025-11-22 07:23:25.716130881 +0000 UTC m=+1248.129524139" watchObservedRunningTime="2025-11-22 07:23:25.719859328 +0000 UTC m=+1248.133252586" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.739729 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371966.115067 podStartE2EDuration="1m10.739708448s" podCreationTimestamp="2025-11-22 07:22:15 +0000 UTC" firstStartedPulling="2025-11-22 07:22:23.687254265 +0000 UTC m=+1186.100647513" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:23:25.734077896 +0000 UTC m=+1248.147471154" watchObservedRunningTime="2025-11-22 07:23:25.739708448 +0000 UTC m=+1248.153101706" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.761021 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.27216021 podStartE2EDuration="1m13.761000129s" podCreationTimestamp="2025-11-22 07:22:12 +0000 UTC" firstStartedPulling="2025-11-22 07:22:14.819965225 +0000 UTC m=+1177.233358483" lastFinishedPulling="2025-11-22 07:22:49.308805144 +0000 UTC m=+1211.722198402" observedRunningTime="2025-11-22 07:23:25.754876714 +0000 UTC m=+1248.168269982" watchObservedRunningTime="2025-11-22 
07:23:25.761000129 +0000 UTC m=+1248.174393387" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.788114 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.188979169 podStartE2EDuration="1m11.788098777s" podCreationTimestamp="2025-11-22 07:22:14 +0000 UTC" firstStartedPulling="2025-11-22 07:22:16.069961578 +0000 UTC m=+1178.483354836" lastFinishedPulling="2025-11-22 07:22:57.669081176 +0000 UTC m=+1220.082474444" observedRunningTime="2025-11-22 07:23:25.786641845 +0000 UTC m=+1248.200035103" watchObservedRunningTime="2025-11-22 07:23:25.788098777 +0000 UTC m=+1248.201492035" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.817133 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=14.659568543 podStartE2EDuration="20.81711491s" podCreationTimestamp="2025-11-22 07:23:05 +0000 UTC" firstStartedPulling="2025-11-22 07:23:16.949162568 +0000 UTC m=+1239.362555826" lastFinishedPulling="2025-11-22 07:23:23.106708935 +0000 UTC m=+1245.520102193" observedRunningTime="2025-11-22 07:23:25.812601091 +0000 UTC m=+1248.225994349" watchObservedRunningTime="2025-11-22 07:23:25.81711491 +0000 UTC m=+1248.230508168" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.836259 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" podStartSLOduration=10.767609253 podStartE2EDuration="24.83623685s" podCreationTimestamp="2025-11-22 07:23:01 +0000 UTC" firstStartedPulling="2025-11-22 07:23:02.303356615 +0000 UTC m=+1224.716749873" lastFinishedPulling="2025-11-22 07:23:16.371984222 +0000 UTC m=+1238.785377470" observedRunningTime="2025-11-22 07:23:25.830869416 +0000 UTC m=+1248.244262684" watchObservedRunningTime="2025-11-22 07:23:25.83623685 +0000 UTC m=+1248.249630108" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.857948 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" podStartSLOduration=17.857929382000002 podStartE2EDuration="17.857929382s" podCreationTimestamp="2025-11-22 07:23:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:23:25.853204197 +0000 UTC m=+1248.266597455" watchObservedRunningTime="2025-11-22 07:23:25.857929382 +0000 UTC m=+1248.271322640" Nov 22 07:23:25 crc kubenswrapper[4856]: I1122 07:23:25.994147 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:25 crc kubenswrapper[4856]: E1122 07:23:25.994356 4856 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 22 07:23:25 crc kubenswrapper[4856]: E1122 07:23:25.994412 4856 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 22 07:23:25 crc kubenswrapper[4856]: E1122 07:23:25.994465 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift podName:8b649794-30ba-493c-9285-05a58981ed36 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:23:41.994446783 +0000 UTC m=+1264.407840041 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift") pod "swift-storage-0" (UID: "8b649794-30ba-493c-9285-05a58981ed36") : configmap "swift-ring-files" not found Nov 22 07:23:26 crc kubenswrapper[4856]: I1122 07:23:26.970278 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 22 07:23:26 crc kubenswrapper[4856]: I1122 07:23:26.970314 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.072839 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.212563 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.304961 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-9mx4q"] Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.305167 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" podUID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerName="dnsmasq-dns" containerID="cri-o://91befb77543f481e5b3fa54c689a0a07ebc807e71eea598d0dca1e86e747500d" gracePeriod=10 Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.308340 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.754658 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.755015 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.755070 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.755801 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4366d97abee77d6bcf27f0824324e78ad727912da8d9c8585365d5f93d21ed74"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:23:29 crc kubenswrapper[4856]: I1122 07:23:29.755864 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://4366d97abee77d6bcf27f0824324e78ad727912da8d9c8585365d5f93d21ed74" 
gracePeriod=600 Nov 22 07:23:30 crc kubenswrapper[4856]: I1122 07:23:30.695176 4856 generic.go:334] "Generic (PLEG): container finished" podID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerID="91befb77543f481e5b3fa54c689a0a07ebc807e71eea598d0dca1e86e747500d" exitCode=0 Nov 22 07:23:30 crc kubenswrapper[4856]: I1122 07:23:30.695267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" event={"ID":"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814","Type":"ContainerDied","Data":"91befb77543f481e5b3fa54c689a0a07ebc807e71eea598d0dca1e86e747500d"} Nov 22 07:23:30 crc kubenswrapper[4856]: I1122 07:23:30.698265 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="4366d97abee77d6bcf27f0824324e78ad727912da8d9c8585365d5f93d21ed74" exitCode=0 Nov 22 07:23:30 crc kubenswrapper[4856]: I1122 07:23:30.698305 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"4366d97abee77d6bcf27f0824324e78ad727912da8d9c8585365d5f93d21ed74"} Nov 22 07:23:30 crc kubenswrapper[4856]: I1122 07:23:30.698362 4856 scope.go:117] "RemoveContainer" containerID="b2ea5ccf83836498246295e06fea7da0e6ecc690c06aeac649547d0e64344abd" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.038573 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.081888 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-ovsdbserver-nb\") pod \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.081967 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-dns-svc\") pod \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.082132 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-config\") pod \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.082239 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxfth\" (UniqueName: \"kubernetes.io/projected/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-kube-api-access-zxfth\") pod \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\" (UID: \"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814\") " Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.088326 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-kube-api-access-zxfth" (OuterVolumeSpecName: "kube-api-access-zxfth") pod "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" (UID: "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814"). InnerVolumeSpecName "kube-api-access-zxfth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.122815 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-config" (OuterVolumeSpecName: "config") pod "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" (UID: "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.152996 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" (UID: "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.173152 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" (UID: "7ed6b3c2-2924-43b9-ab1c-1da6b18cc814"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.184477 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxfth\" (UniqueName: \"kubernetes.io/projected/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-kube-api-access-zxfth\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.184572 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.184590 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.184604 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.710728 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.710723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c65c5f57f-9mx4q" event={"ID":"7ed6b3c2-2924-43b9-ab1c-1da6b18cc814","Type":"ContainerDied","Data":"23a76cc851d6cb50d07a7e090e9c3eb53609be00eb97238f1ed799957e227183"} Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.711246 4856 scope.go:117] "RemoveContainer" containerID="91befb77543f481e5b3fa54c689a0a07ebc807e71eea598d0dca1e86e747500d" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.714239 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"b2d6ca7441dd492e3a581af2bfbc9e9d1023d20289aecd1a0ad5d8af62f035ce"} Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.739956 4856 scope.go:117] "RemoveContainer" containerID="91ce8e1bc2027dfb2c393d69899ca6977785f098b6227ed8a592fe49deea0b07" Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.759145 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-9mx4q"] Nov 22 07:23:31 crc kubenswrapper[4856]: I1122 07:23:31.767169 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c65c5f57f-9mx4q"] Nov 22 07:23:32 crc kubenswrapper[4856]: I1122 07:23:32.732712 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" path="/var/lib/kubelet/pods/7ed6b3c2-2924-43b9-ab1c-1da6b18cc814/volumes" Nov 22 07:23:33 crc kubenswrapper[4856]: I1122 07:23:33.027541 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hwrb9" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" probeResult="failure" output=< Nov 22 07:23:33 crc kubenswrapper[4856]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 07:23:33 crc kubenswrapper[4856]: > Nov 22 07:23:33 crc kubenswrapper[4856]: I1122 07:23:33.210347 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:23:33 crc kubenswrapper[4856]: I1122 07:23:33.749095 4856 generic.go:334] "Generic (PLEG): container finished" podID="1c34ba2b-b0cb-4527-b651-a888c0b49d32" containerID="310c395b35f6d8ce91619f7277306489a7826437eb66bb531fd9e9b73c33c26d" exitCode=0 Nov 22 07:23:33 crc kubenswrapper[4856]: I1122 07:23:33.749178 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5mjrn" event={"ID":"1c34ba2b-b0cb-4527-b651-a888c0b49d32","Type":"ContainerDied","Data":"310c395b35f6d8ce91619f7277306489a7826437eb66bb531fd9e9b73c33c26d"} Nov 22 07:23:34 crc kubenswrapper[4856]: I1122 07:23:34.033757 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:23:34 crc kubenswrapper[4856]: I1122 07:23:34.371211 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.110128 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.154142 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-ring-data-devices\") pod \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.154201 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-dispersionconf\") pod \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.154273 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xthlc\" (UniqueName: \"kubernetes.io/projected/1c34ba2b-b0cb-4527-b651-a888c0b49d32-kube-api-access-xthlc\") pod \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.154353 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-scripts\") pod \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.154374 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1c34ba2b-b0cb-4527-b651-a888c0b49d32-etc-swift\") pod \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.154391 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-combined-ca-bundle\") pod \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.154432 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-swiftconf\") pod \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\" (UID: \"1c34ba2b-b0cb-4527-b651-a888c0b49d32\") " Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.156167 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1c34ba2b-b0cb-4527-b651-a888c0b49d32" (UID: "1c34ba2b-b0cb-4527-b651-a888c0b49d32"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.156605 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c34ba2b-b0cb-4527-b651-a888c0b49d32-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1c34ba2b-b0cb-4527-b651-a888c0b49d32" (UID: "1c34ba2b-b0cb-4527-b651-a888c0b49d32"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.178959 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c34ba2b-b0cb-4527-b651-a888c0b49d32-kube-api-access-xthlc" (OuterVolumeSpecName: "kube-api-access-xthlc") pod "1c34ba2b-b0cb-4527-b651-a888c0b49d32" (UID: "1c34ba2b-b0cb-4527-b651-a888c0b49d32"). InnerVolumeSpecName "kube-api-access-xthlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.179886 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-scripts" (OuterVolumeSpecName: "scripts") pod "1c34ba2b-b0cb-4527-b651-a888c0b49d32" (UID: "1c34ba2b-b0cb-4527-b651-a888c0b49d32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.182139 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1c34ba2b-b0cb-4527-b651-a888c0b49d32" (UID: "1c34ba2b-b0cb-4527-b651-a888c0b49d32"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.183065 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1c34ba2b-b0cb-4527-b651-a888c0b49d32" (UID: "1c34ba2b-b0cb-4527-b651-a888c0b49d32"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.183112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c34ba2b-b0cb-4527-b651-a888c0b49d32" (UID: "1c34ba2b-b0cb-4527-b651-a888c0b49d32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.256330 4856 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1c34ba2b-b0cb-4527-b651-a888c0b49d32-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.256375 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.256386 4856 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.256396 4856 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.256406 4856 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1c34ba2b-b0cb-4527-b651-a888c0b49d32-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.256414 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xthlc\" (UniqueName: \"kubernetes.io/projected/1c34ba2b-b0cb-4527-b651-a888c0b49d32-kube-api-access-xthlc\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.256422 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1c34ba2b-b0cb-4527-b651-a888c0b49d32-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.571666 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.572278 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.581355 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.733785 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.768440 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.798375 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-5mjrn" event={"ID":"1c34ba2b-b0cb-4527-b651-a888c0b49d32","Type":"ContainerDied","Data":"4ba4252337d08696ab1bd4a604c12d12aa254009918764086b4f8e21b13dd6db"} Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.798720 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ba4252337d08696ab1bd4a604c12d12aa254009918764086b4f8e21b13dd6db" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.798891 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-5mjrn" Nov 22 07:23:35 crc kubenswrapper[4856]: I1122 07:23:35.945605 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 22 07:23:36 crc kubenswrapper[4856]: I1122 07:23:36.034293 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.059768 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-39ec-account-create-sh72f"] Nov 22 07:23:37 crc kubenswrapper[4856]: E1122 07:23:37.060066 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c34ba2b-b0cb-4527-b651-a888c0b49d32" containerName="swift-ring-rebalance" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060077 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c34ba2b-b0cb-4527-b651-a888c0b49d32" containerName="swift-ring-rebalance" Nov 22 07:23:37 crc kubenswrapper[4856]: E1122 07:23:37.060087 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerName="dnsmasq-dns" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060092 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerName="dnsmasq-dns" Nov 22 07:23:37 crc kubenswrapper[4856]: E1122 07:23:37.060108 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a603c362-75dd-4b1a-a7ba-b49a5da55cf0" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060114 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a603c362-75dd-4b1a-a7ba-b49a5da55cf0" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: E1122 07:23:37.060125 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2717bae-6059-463a-a2e6-eec30a5b57f4" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060132 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2717bae-6059-463a-a2e6-eec30a5b57f4" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: E1122 07:23:37.060144 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060149 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060315 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c34ba2b-b0cb-4527-b651-a888c0b49d32" containerName="swift-ring-rebalance" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060340 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed6b3c2-2924-43b9-ab1c-1da6b18cc814" containerName="dnsmasq-dns" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060355 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2717bae-6059-463a-a2e6-eec30a5b57f4" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060368 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a603c362-75dd-4b1a-a7ba-b49a5da55cf0" containerName="init" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.060842 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.062840 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.114095 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-39ec-account-create-sh72f"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.151314 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-8278h"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.152332 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.162695 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8278h"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.215748 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-operator-scripts\") pod \"keystone-39ec-account-create-sh72f\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.215796 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7q2\" (UniqueName: \"kubernetes.io/projected/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-kube-api-access-hw7q2\") pod \"keystone-39ec-account-create-sh72f\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.317173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdn8z\" (UniqueName: \"kubernetes.io/projected/e04b9723-304a-46a5-a230-2daf9bcd6c3c-kube-api-access-cdn8z\") pod \"keystone-db-create-8278h\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.317912 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04b9723-304a-46a5-a230-2daf9bcd6c3c-operator-scripts\") pod \"keystone-db-create-8278h\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.317994 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-operator-scripts\") pod \"keystone-39ec-account-create-sh72f\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.318019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw7q2\" (UniqueName: \"kubernetes.io/projected/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-kube-api-access-hw7q2\") pod \"keystone-39ec-account-create-sh72f\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.319249 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-operator-scripts\") pod \"keystone-39ec-account-create-sh72f\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.331962 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-lwbks"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.333241 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.343557 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lwbks"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.348264 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw7q2\" (UniqueName: \"kubernetes.io/projected/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-kube-api-access-hw7q2\") pod \"keystone-39ec-account-create-sh72f\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.420006 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb599bf-1bc1-4497-82a8-2165e566aaa4-operator-scripts\") pod \"placement-db-create-lwbks\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.420090 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdn8z\" (UniqueName: \"kubernetes.io/projected/e04b9723-304a-46a5-a230-2daf9bcd6c3c-kube-api-access-cdn8z\") pod \"keystone-db-create-8278h\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.420176 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb2rj\" (UniqueName: \"kubernetes.io/projected/7cb599bf-1bc1-4497-82a8-2165e566aaa4-kube-api-access-cb2rj\") pod \"placement-db-create-lwbks\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.420227 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04b9723-304a-46a5-a230-2daf9bcd6c3c-operator-scripts\") pod \"keystone-db-create-8278h\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.421139 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04b9723-304a-46a5-a230-2daf9bcd6c3c-operator-scripts\") pod \"keystone-db-create-8278h\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.423672 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.438603 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdn8z\" (UniqueName: \"kubernetes.io/projected/e04b9723-304a-46a5-a230-2daf9bcd6c3c-kube-api-access-cdn8z\") pod \"keystone-db-create-8278h\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.467715 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8278h" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.471930 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-00a5-account-create-tpf6w"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.474264 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.477321 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.495108 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00a5-account-create-tpf6w"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.523683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb2rj\" (UniqueName: \"kubernetes.io/projected/7cb599bf-1bc1-4497-82a8-2165e566aaa4-kube-api-access-cb2rj\") pod \"placement-db-create-lwbks\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.523791 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb599bf-1bc1-4497-82a8-2165e566aaa4-operator-scripts\") pod \"placement-db-create-lwbks\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.524481 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb599bf-1bc1-4497-82a8-2165e566aaa4-operator-scripts\") pod \"placement-db-create-lwbks\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.551182 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb2rj\" (UniqueName: \"kubernetes.io/projected/7cb599bf-1bc1-4497-82a8-2165e566aaa4-kube-api-access-cb2rj\") pod \"placement-db-create-lwbks\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.595122 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4896z"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.600838 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.609204 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4896z"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.629197 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5bd7b67-77ce-4a59-a510-f5b39de503d8-operator-scripts\") pod \"placement-00a5-account-create-tpf6w\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.629251 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllk5\" (UniqueName: \"kubernetes.io/projected/d5bd7b67-77ce-4a59-a510-f5b39de503d8-kube-api-access-jllk5\") pod \"placement-00a5-account-create-tpf6w\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.648910 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lwbks" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.690627 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-1a70-account-create-dp24l"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.691655 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.694856 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.698136 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1a70-account-create-dp24l"] Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.733734 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5bd7b67-77ce-4a59-a510-f5b39de503d8-operator-scripts\") pod \"placement-00a5-account-create-tpf6w\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.733848 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllk5\" (UniqueName: \"kubernetes.io/projected/d5bd7b67-77ce-4a59-a510-f5b39de503d8-kube-api-access-jllk5\") pod \"placement-00a5-account-create-tpf6w\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.733889 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0079df7-afe2-44a1-9c44-aabed35e0920-operator-scripts\") pod \"glance-db-create-4896z\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.733921 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4q6s\" (UniqueName: \"kubernetes.io/projected/d0079df7-afe2-44a1-9c44-aabed35e0920-kube-api-access-g4q6s\") pod \"glance-db-create-4896z\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " 
pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.737240 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5bd7b67-77ce-4a59-a510-f5b39de503d8-operator-scripts\") pod \"placement-00a5-account-create-tpf6w\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.762430 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllk5\" (UniqueName: \"kubernetes.io/projected/d5bd7b67-77ce-4a59-a510-f5b39de503d8-kube-api-access-jllk5\") pod \"placement-00a5-account-create-tpf6w\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.839524 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v6rn\" (UniqueName: \"kubernetes.io/projected/f061b34a-dff9-42e7-8b22-2cce81c12234-kube-api-access-5v6rn\") pod \"glance-1a70-account-create-dp24l\" (UID: \"f061b34a-dff9-42e7-8b22-2cce81c12234\") " pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.839615 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f061b34a-dff9-42e7-8b22-2cce81c12234-operator-scripts\") pod \"glance-1a70-account-create-dp24l\" (UID: \"f061b34a-dff9-42e7-8b22-2cce81c12234\") " pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.839742 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0079df7-afe2-44a1-9c44-aabed35e0920-operator-scripts\") pod \"glance-db-create-4896z\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.839776 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4q6s\" (UniqueName: \"kubernetes.io/projected/d0079df7-afe2-44a1-9c44-aabed35e0920-kube-api-access-g4q6s\") pod \"glance-db-create-4896z\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.843617 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0079df7-afe2-44a1-9c44-aabed35e0920-operator-scripts\") pod \"glance-db-create-4896z\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.859322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4q6s\" (UniqueName: \"kubernetes.io/projected/d0079df7-afe2-44a1-9c44-aabed35e0920-kube-api-access-g4q6s\") pod \"glance-db-create-4896z\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.940790 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v6rn\" (UniqueName: \"kubernetes.io/projected/f061b34a-dff9-42e7-8b22-2cce81c12234-kube-api-access-5v6rn\") pod \"glance-1a70-account-create-dp24l\" (UID: 
\"f061b34a-dff9-42e7-8b22-2cce81c12234\") " pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.940860 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f061b34a-dff9-42e7-8b22-2cce81c12234-operator-scripts\") pod \"glance-1a70-account-create-dp24l\" (UID: \"f061b34a-dff9-42e7-8b22-2cce81c12234\") " pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.945192 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f061b34a-dff9-42e7-8b22-2cce81c12234-operator-scripts\") pod \"glance-1a70-account-create-dp24l\" (UID: \"f061b34a-dff9-42e7-8b22-2cce81c12234\") " pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.945437 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.961723 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v6rn\" (UniqueName: \"kubernetes.io/projected/f061b34a-dff9-42e7-8b22-2cce81c12234-kube-api-access-5v6rn\") pod \"glance-1a70-account-create-dp24l\" (UID: \"f061b34a-dff9-42e7-8b22-2cce81c12234\") " pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.972223 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4896z" Nov 22 07:23:37 crc kubenswrapper[4856]: I1122 07:23:37.983232 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-39ec-account-create-sh72f"] Nov 22 07:23:38 crc kubenswrapper[4856]: W1122 07:23:38.008521 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b5de4ba_e26d_45de_a653_8cc9be68d5c3.slice/crio-106f63100156557154f93b3c708d99b446712c0826c2fe94f1a3ec335bab021e WatchSource:0}: Error finding container 106f63100156557154f93b3c708d99b446712c0826c2fe94f1a3ec335bab021e: Status 404 returned error can't find the container with id 106f63100156557154f93b3c708d99b446712c0826c2fe94f1a3ec335bab021e Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.016082 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.033784 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hwrb9" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" probeResult="failure" output=< Nov 22 07:23:38 crc kubenswrapper[4856]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 22 07:23:38 crc kubenswrapper[4856]: > Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.132013 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-8278h"] Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.201323 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-lwbks"] Nov 22 07:23:38 crc kubenswrapper[4856]: W1122 07:23:38.234737 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cb599bf_1bc1_4497_82a8_2165e566aaa4.slice/crio-ac3040ac8e438b5b25d76f2666e7ef5396efb9586bca2608c7faf8cb0eb33502 WatchSource:0}: Error finding container ac3040ac8e438b5b25d76f2666e7ef5396efb9586bca2608c7faf8cb0eb33502: Status 404 returned error can't find the container with id ac3040ac8e438b5b25d76f2666e7ef5396efb9586bca2608c7faf8cb0eb33502 Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.256405 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00a5-account-create-tpf6w"] Nov 22 07:23:38 crc kubenswrapper[4856]: W1122 07:23:38.286655 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5bd7b67_77ce_4a59_a510_f5b39de503d8.slice/crio-9c0dfb0261d3f1af48dd58dada991c1ef828e40c3a13c59cc22bc6ce4e2a4a47 WatchSource:0}: Error finding container 9c0dfb0261d3f1af48dd58dada991c1ef828e40c3a13c59cc22bc6ce4e2a4a47: Status 404 returned error can't find the container with id 9c0dfb0261d3f1af48dd58dada991c1ef828e40c3a13c59cc22bc6ce4e2a4a47 Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.293064 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.512935 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hwrb9-config-mzsjk"] Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.515734 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.525051 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.532126 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hwrb9-config-mzsjk"] Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.586570 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4896z"] Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.602177 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-1a70-account-create-dp24l"] Nov 22 07:23:38 crc kubenswrapper[4856]: W1122 07:23:38.644547 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0079df7_afe2_44a1_9c44_aabed35e0920.slice/crio-6b3046f1dbcd3d248c8125703a1eedf66af6d06c54d3c6b5425bcf0170295ade WatchSource:0}: Error finding container 6b3046f1dbcd3d248c8125703a1eedf66af6d06c54d3c6b5425bcf0170295ade: Status 404 returned error can't find the container with id 6b3046f1dbcd3d248c8125703a1eedf66af6d06c54d3c6b5425bcf0170295ade Nov 22 07:23:38 crc kubenswrapper[4856]: W1122 07:23:38.645314 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf061b34a_dff9_42e7_8b22_2cce81c12234.slice/crio-da6dc0747d758f373df70988d6ece1e4357ba2b548456355a77153c09524a5c2 WatchSource:0}: Error finding container da6dc0747d758f373df70988d6ece1e4357ba2b548456355a77153c09524a5c2: Status 404 returned error can't find the container with id da6dc0747d758f373df70988d6ece1e4357ba2b548456355a77153c09524a5c2 Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.658994 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.661431 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-scripts\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.661595 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgclc\" (UniqueName: \"kubernetes.io/projected/259ff6f3-7a32-446c-b8d0-799a11191319-kube-api-access-wgclc\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.661651 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-additional-scripts\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.661741 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run-ovn\") pod 
\"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.661795 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-log-ovn\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.661829 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.765895 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run-ovn\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.765974 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-log-ovn\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.766015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.766105 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-scripts\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.766150 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgclc\" (UniqueName: \"kubernetes.io/projected/259ff6f3-7a32-446c-b8d0-799a11191319-kube-api-access-wgclc\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.766176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-additional-scripts\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.766685 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run-ovn\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.766763 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-log-ovn\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.766813 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.769308 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-scripts\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.770454 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.776989 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-additional-scripts\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.794615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgclc\" (UniqueName: \"kubernetes.io/projected/259ff6f3-7a32-446c-b8d0-799a11191319-kube-api-access-wgclc\") pod \"ovn-controller-hwrb9-config-mzsjk\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.825779 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b5de4ba-e26d-45de-a653-8cc9be68d5c3" containerID="208c215251646795ab6cb26edb516b97fe496400e51e1e8f665ed937642f1204" exitCode=0 Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.825885 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-39ec-account-create-sh72f" event={"ID":"8b5de4ba-e26d-45de-a653-8cc9be68d5c3","Type":"ContainerDied","Data":"208c215251646795ab6cb26edb516b97fe496400e51e1e8f665ed937642f1204"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.825931 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-39ec-account-create-sh72f" event={"ID":"8b5de4ba-e26d-45de-a653-8cc9be68d5c3","Type":"ContainerStarted","Data":"106f63100156557154f93b3c708d99b446712c0826c2fe94f1a3ec335bab021e"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.834991 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00a5-account-create-tpf6w" event={"ID":"d5bd7b67-77ce-4a59-a510-f5b39de503d8","Type":"ContainerStarted","Data":"ea317fed3d371307a9aff011a5dcf70ef5c76887c02d1086551cc16eb012b860"} 
Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.835060 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00a5-account-create-tpf6w" event={"ID":"d5bd7b67-77ce-4a59-a510-f5b39de503d8","Type":"ContainerStarted","Data":"9c0dfb0261d3f1af48dd58dada991c1ef828e40c3a13c59cc22bc6ce4e2a4a47"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.838787 4856 generic.go:334] "Generic (PLEG): container finished" podID="e04b9723-304a-46a5-a230-2daf9bcd6c3c" containerID="a47e77fe08fca49e6ceafb9d80866fae9b23a969620c35a33c829ec365ae8186" exitCode=0 Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.838923 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8278h" event={"ID":"e04b9723-304a-46a5-a230-2daf9bcd6c3c","Type":"ContainerDied","Data":"a47e77fe08fca49e6ceafb9d80866fae9b23a969620c35a33c829ec365ae8186"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.838962 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8278h" event={"ID":"e04b9723-304a-46a5-a230-2daf9bcd6c3c","Type":"ContainerStarted","Data":"c1fbe75bac2b7967d99dafcb1b34a45fc6b291172caac1e3974de7db01732833"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.844967 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwbks" event={"ID":"7cb599bf-1bc1-4497-82a8-2165e566aaa4","Type":"ContainerStarted","Data":"c4f22629590a58ea96054eca0236b16ae796cd096384d9dace24279356f1b90a"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.845015 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwbks" event={"ID":"7cb599bf-1bc1-4497-82a8-2165e566aaa4","Type":"ContainerStarted","Data":"ac3040ac8e438b5b25d76f2666e7ef5396efb9586bca2608c7faf8cb0eb33502"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.848947 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4896z" event={"ID":"d0079df7-afe2-44a1-9c44-aabed35e0920","Type":"ContainerStarted","Data":"6b3046f1dbcd3d248c8125703a1eedf66af6d06c54d3c6b5425bcf0170295ade"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.850235 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1a70-account-create-dp24l" event={"ID":"f061b34a-dff9-42e7-8b22-2cce81c12234","Type":"ContainerStarted","Data":"da6dc0747d758f373df70988d6ece1e4357ba2b548456355a77153c09524a5c2"} Nov 22 07:23:38 crc kubenswrapper[4856]: I1122 07:23:38.861235 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.335449 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hwrb9-config-mzsjk"] Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.860457 4856 generic.go:334] "Generic (PLEG): container finished" podID="d5bd7b67-77ce-4a59-a510-f5b39de503d8" containerID="ea317fed3d371307a9aff011a5dcf70ef5c76887c02d1086551cc16eb012b860" exitCode=0 Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.860649 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00a5-account-create-tpf6w" event={"ID":"d5bd7b67-77ce-4a59-a510-f5b39de503d8","Type":"ContainerDied","Data":"ea317fed3d371307a9aff011a5dcf70ef5c76887c02d1086551cc16eb012b860"} Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.862475 4856 generic.go:334] "Generic (PLEG): container finished" podID="7cb599bf-1bc1-4497-82a8-2165e566aaa4" containerID="c4f22629590a58ea96054eca0236b16ae796cd096384d9dace24279356f1b90a" exitCode=0 Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.862549 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwbks" event={"ID":"7cb599bf-1bc1-4497-82a8-2165e566aaa4","Type":"ContainerDied","Data":"c4f22629590a58ea96054eca0236b16ae796cd096384d9dace24279356f1b90a"} Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.864869 4856 generic.go:334] "Generic (PLEG): container finished" podID="d0079df7-afe2-44a1-9c44-aabed35e0920" containerID="0db464afbcabb57a2015f38a0ea5f2f9f6a53038f4ec14b448d5d42cb67e6f59" exitCode=0 Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.864953 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4896z" event={"ID":"d0079df7-afe2-44a1-9c44-aabed35e0920","Type":"ContainerDied","Data":"0db464afbcabb57a2015f38a0ea5f2f9f6a53038f4ec14b448d5d42cb67e6f59"} Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.866943 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9-config-mzsjk" event={"ID":"259ff6f3-7a32-446c-b8d0-799a11191319","Type":"ContainerStarted","Data":"509825c540533ac55bf923080287a8b9f0531f9bae14c1af67afe971497af123"} Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.867016 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9-config-mzsjk" event={"ID":"259ff6f3-7a32-446c-b8d0-799a11191319","Type":"ContainerStarted","Data":"841fff4b4c546b8352642a5e6b72a580d21b22be8a7ecb9eabbdb55ce012b75c"} Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.868789 4856 generic.go:334] "Generic (PLEG): container finished" podID="f061b34a-dff9-42e7-8b22-2cce81c12234" containerID="4525c416276fb4175bdbacfe90bd2046611ed1f320269576d4c9ceac24f98c99" exitCode=0 Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.868855 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1a70-account-create-dp24l" event={"ID":"f061b34a-dff9-42e7-8b22-2cce81c12234","Type":"ContainerDied","Data":"4525c416276fb4175bdbacfe90bd2046611ed1f320269576d4c9ceac24f98c99"} Nov 22 07:23:39 crc kubenswrapper[4856]: I1122 07:23:39.909939 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hwrb9-config-mzsjk" podStartSLOduration=1.909905186 podStartE2EDuration="1.909905186s" podCreationTimestamp="2025-11-22 07:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:23:39.902463183 +0000 UTC m=+1262.315856441" watchObservedRunningTime="2025-11-22 07:23:39.909905186 +0000 UTC m=+1262.323298444" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.251333 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.396942 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jllk5\" (UniqueName: \"kubernetes.io/projected/d5bd7b67-77ce-4a59-a510-f5b39de503d8-kube-api-access-jllk5\") pod \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.397031 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5bd7b67-77ce-4a59-a510-f5b39de503d8-operator-scripts\") pod \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\" (UID: \"d5bd7b67-77ce-4a59-a510-f5b39de503d8\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.397947 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5bd7b67-77ce-4a59-a510-f5b39de503d8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d5bd7b67-77ce-4a59-a510-f5b39de503d8" (UID: "d5bd7b67-77ce-4a59-a510-f5b39de503d8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.402901 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5bd7b67-77ce-4a59-a510-f5b39de503d8-kube-api-access-jllk5" (OuterVolumeSpecName: "kube-api-access-jllk5") pod "d5bd7b67-77ce-4a59-a510-f5b39de503d8" (UID: "d5bd7b67-77ce-4a59-a510-f5b39de503d8"). InnerVolumeSpecName "kube-api-access-jllk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.438197 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.449942 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8278h" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.464113 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-lwbks" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.498692 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jllk5\" (UniqueName: \"kubernetes.io/projected/d5bd7b67-77ce-4a59-a510-f5b39de503d8-kube-api-access-jllk5\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.498730 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5bd7b67-77ce-4a59-a510-f5b39de503d8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.600836 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdn8z\" (UniqueName: \"kubernetes.io/projected/e04b9723-304a-46a5-a230-2daf9bcd6c3c-kube-api-access-cdn8z\") pod \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.600947 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb599bf-1bc1-4497-82a8-2165e566aaa4-operator-scripts\") pod \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.601021 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-operator-scripts\") pod \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.601162 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw7q2\" (UniqueName: \"kubernetes.io/projected/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-kube-api-access-hw7q2\") pod \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\" (UID: \"8b5de4ba-e26d-45de-a653-8cc9be68d5c3\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.601242 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04b9723-304a-46a5-a230-2daf9bcd6c3c-operator-scripts\") pod \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\" (UID: \"e04b9723-304a-46a5-a230-2daf9bcd6c3c\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.601343 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb2rj\" (UniqueName: \"kubernetes.io/projected/7cb599bf-1bc1-4497-82a8-2165e566aaa4-kube-api-access-cb2rj\") pod \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\" (UID: \"7cb599bf-1bc1-4497-82a8-2165e566aaa4\") " Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.601918 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b5de4ba-e26d-45de-a653-8cc9be68d5c3" (UID: "8b5de4ba-e26d-45de-a653-8cc9be68d5c3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.603445 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.605830 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e04b9723-304a-46a5-a230-2daf9bcd6c3c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e04b9723-304a-46a5-a230-2daf9bcd6c3c" (UID: "e04b9723-304a-46a5-a230-2daf9bcd6c3c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.606578 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb599bf-1bc1-4497-82a8-2165e566aaa4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7cb599bf-1bc1-4497-82a8-2165e566aaa4" (UID: "7cb599bf-1bc1-4497-82a8-2165e566aaa4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.618070 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e04b9723-304a-46a5-a230-2daf9bcd6c3c-kube-api-access-cdn8z" (OuterVolumeSpecName: "kube-api-access-cdn8z") pod "e04b9723-304a-46a5-a230-2daf9bcd6c3c" (UID: "e04b9723-304a-46a5-a230-2daf9bcd6c3c"). InnerVolumeSpecName "kube-api-access-cdn8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.621250 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb599bf-1bc1-4497-82a8-2165e566aaa4-kube-api-access-cb2rj" (OuterVolumeSpecName: "kube-api-access-cb2rj") pod "7cb599bf-1bc1-4497-82a8-2165e566aaa4" (UID: "7cb599bf-1bc1-4497-82a8-2165e566aaa4"). InnerVolumeSpecName "kube-api-access-cb2rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.625639 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-kube-api-access-hw7q2" (OuterVolumeSpecName: "kube-api-access-hw7q2") pod "8b5de4ba-e26d-45de-a653-8cc9be68d5c3" (UID: "8b5de4ba-e26d-45de-a653-8cc9be68d5c3"). InnerVolumeSpecName "kube-api-access-hw7q2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.705382 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdn8z\" (UniqueName: \"kubernetes.io/projected/e04b9723-304a-46a5-a230-2daf9bcd6c3c-kube-api-access-cdn8z\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.705670 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7cb599bf-1bc1-4497-82a8-2165e566aaa4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.705680 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw7q2\" (UniqueName: \"kubernetes.io/projected/8b5de4ba-e26d-45de-a653-8cc9be68d5c3-kube-api-access-hw7q2\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.705689 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e04b9723-304a-46a5-a230-2daf9bcd6c3c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.705699 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb2rj\" (UniqueName: \"kubernetes.io/projected/7cb599bf-1bc1-4497-82a8-2165e566aaa4-kube-api-access-cb2rj\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.877755 4856 generic.go:334] "Generic (PLEG): container finished" podID="259ff6f3-7a32-446c-b8d0-799a11191319" containerID="509825c540533ac55bf923080287a8b9f0531f9bae14c1af67afe971497af123" exitCode=0 Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.877791 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9-config-mzsjk" event={"ID":"259ff6f3-7a32-446c-b8d0-799a11191319","Type":"ContainerDied","Data":"509825c540533ac55bf923080287a8b9f0531f9bae14c1af67afe971497af123"} Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.879955 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-39ec-account-create-sh72f" event={"ID":"8b5de4ba-e26d-45de-a653-8cc9be68d5c3","Type":"ContainerDied","Data":"106f63100156557154f93b3c708d99b446712c0826c2fe94f1a3ec335bab021e"} Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.879992 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-39ec-account-create-sh72f" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.880012 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="106f63100156557154f93b3c708d99b446712c0826c2fe94f1a3ec335bab021e" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.881422 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00a5-account-create-tpf6w" event={"ID":"d5bd7b67-77ce-4a59-a510-f5b39de503d8","Type":"ContainerDied","Data":"9c0dfb0261d3f1af48dd58dada991c1ef828e40c3a13c59cc22bc6ce4e2a4a47"} Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.881440 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-00a5-account-create-tpf6w" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.881496 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c0dfb0261d3f1af48dd58dada991c1ef828e40c3a13c59cc22bc6ce4e2a4a47" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.883822 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-8278h" event={"ID":"e04b9723-304a-46a5-a230-2daf9bcd6c3c","Type":"ContainerDied","Data":"c1fbe75bac2b7967d99dafcb1b34a45fc6b291172caac1e3974de7db01732833"} Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.883862 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1fbe75bac2b7967d99dafcb1b34a45fc6b291172caac1e3974de7db01732833" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.884005 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-8278h" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.885969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-lwbks" event={"ID":"7cb599bf-1bc1-4497-82a8-2165e566aaa4","Type":"ContainerDied","Data":"ac3040ac8e438b5b25d76f2666e7ef5396efb9586bca2608c7faf8cb0eb33502"} Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.886072 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-lwbks" Nov 22 07:23:40 crc kubenswrapper[4856]: I1122 07:23:40.886668 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac3040ac8e438b5b25d76f2666e7ef5396efb9586bca2608c7faf8cb0eb33502" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.230344 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4896z" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.239930 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.316123 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0079df7-afe2-44a1-9c44-aabed35e0920-operator-scripts\") pod \"d0079df7-afe2-44a1-9c44-aabed35e0920\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.316196 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4q6s\" (UniqueName: \"kubernetes.io/projected/d0079df7-afe2-44a1-9c44-aabed35e0920-kube-api-access-g4q6s\") pod \"d0079df7-afe2-44a1-9c44-aabed35e0920\" (UID: \"d0079df7-afe2-44a1-9c44-aabed35e0920\") " Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.317122 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0079df7-afe2-44a1-9c44-aabed35e0920-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0079df7-afe2-44a1-9c44-aabed35e0920" (UID: "d0079df7-afe2-44a1-9c44-aabed35e0920"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.323188 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0079df7-afe2-44a1-9c44-aabed35e0920-kube-api-access-g4q6s" (OuterVolumeSpecName: "kube-api-access-g4q6s") pod "d0079df7-afe2-44a1-9c44-aabed35e0920" (UID: "d0079df7-afe2-44a1-9c44-aabed35e0920"). InnerVolumeSpecName "kube-api-access-g4q6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.417643 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v6rn\" (UniqueName: \"kubernetes.io/projected/f061b34a-dff9-42e7-8b22-2cce81c12234-kube-api-access-5v6rn\") pod \"f061b34a-dff9-42e7-8b22-2cce81c12234\" (UID: \"f061b34a-dff9-42e7-8b22-2cce81c12234\") " Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.418745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f061b34a-dff9-42e7-8b22-2cce81c12234-operator-scripts\") pod \"f061b34a-dff9-42e7-8b22-2cce81c12234\" (UID: \"f061b34a-dff9-42e7-8b22-2cce81c12234\") " Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.419194 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0079df7-afe2-44a1-9c44-aabed35e0920-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.419216 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4q6s\" (UniqueName: \"kubernetes.io/projected/d0079df7-afe2-44a1-9c44-aabed35e0920-kube-api-access-g4q6s\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.419238 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f061b34a-dff9-42e7-8b22-2cce81c12234-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f061b34a-dff9-42e7-8b22-2cce81c12234" (UID: "f061b34a-dff9-42e7-8b22-2cce81c12234"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.420818 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f061b34a-dff9-42e7-8b22-2cce81c12234-kube-api-access-5v6rn" (OuterVolumeSpecName: "kube-api-access-5v6rn") pod "f061b34a-dff9-42e7-8b22-2cce81c12234" (UID: "f061b34a-dff9-42e7-8b22-2cce81c12234"). InnerVolumeSpecName "kube-api-access-5v6rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.521052 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v6rn\" (UniqueName: \"kubernetes.io/projected/f061b34a-dff9-42e7-8b22-2cce81c12234-kube-api-access-5v6rn\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.521093 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f061b34a-dff9-42e7-8b22-2cce81c12234-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.905230 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4896z" event={"ID":"d0079df7-afe2-44a1-9c44-aabed35e0920","Type":"ContainerDied","Data":"6b3046f1dbcd3d248c8125703a1eedf66af6d06c54d3c6b5425bcf0170295ade"} Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.905339 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3046f1dbcd3d248c8125703a1eedf66af6d06c54d3c6b5425bcf0170295ade" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.905559 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4896z" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.908632 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-1a70-account-create-dp24l" Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.908653 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-1a70-account-create-dp24l" event={"ID":"f061b34a-dff9-42e7-8b22-2cce81c12234","Type":"ContainerDied","Data":"da6dc0747d758f373df70988d6ece1e4357ba2b548456355a77153c09524a5c2"} Nov 22 07:23:41 crc kubenswrapper[4856]: I1122 07:23:41.908710 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da6dc0747d758f373df70988d6ece1e4357ba2b548456355a77153c09524a5c2" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.030359 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.042418 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"swift-storage-0\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " pod="openstack/swift-storage-0" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.173222 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.221798 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.334808 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run-ovn\") pod \"259ff6f3-7a32-446c-b8d0-799a11191319\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.334920 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-scripts\") pod \"259ff6f3-7a32-446c-b8d0-799a11191319\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.334993 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-additional-scripts\") pod \"259ff6f3-7a32-446c-b8d0-799a11191319\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.335009 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run\") pod \"259ff6f3-7a32-446c-b8d0-799a11191319\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.334906 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "259ff6f3-7a32-446c-b8d0-799a11191319" (UID: "259ff6f3-7a32-446c-b8d0-799a11191319"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.335234 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run" (OuterVolumeSpecName: "var-run") pod "259ff6f3-7a32-446c-b8d0-799a11191319" (UID: "259ff6f3-7a32-446c-b8d0-799a11191319"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.335988 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "259ff6f3-7a32-446c-b8d0-799a11191319" (UID: "259ff6f3-7a32-446c-b8d0-799a11191319"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336071 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-log-ovn\") pod \"259ff6f3-7a32-446c-b8d0-799a11191319\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336119 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgclc\" (UniqueName: \"kubernetes.io/projected/259ff6f3-7a32-446c-b8d0-799a11191319-kube-api-access-wgclc\") pod \"259ff6f3-7a32-446c-b8d0-799a11191319\" (UID: \"259ff6f3-7a32-446c-b8d0-799a11191319\") " Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336204 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-scripts" (OuterVolumeSpecName: "scripts") pod "259ff6f3-7a32-446c-b8d0-799a11191319" (UID: "259ff6f3-7a32-446c-b8d0-799a11191319"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336276 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "259ff6f3-7a32-446c-b8d0-799a11191319" (UID: "259ff6f3-7a32-446c-b8d0-799a11191319"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336440 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336458 4856 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/259ff6f3-7a32-446c-b8d0-799a11191319-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336469 4856 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336478 4856 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.336490 4856 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/259ff6f3-7a32-446c-b8d0-799a11191319-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.340845 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259ff6f3-7a32-446c-b8d0-799a11191319-kube-api-access-wgclc" (OuterVolumeSpecName: "kube-api-access-wgclc") pod "259ff6f3-7a32-446c-b8d0-799a11191319" (UID: "259ff6f3-7a32-446c-b8d0-799a11191319"). InnerVolumeSpecName "kube-api-access-wgclc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.447662 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgclc\" (UniqueName: \"kubernetes.io/projected/259ff6f3-7a32-446c-b8d0-799a11191319-kube-api-access-wgclc\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.848383 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vqbwk"] Nov 22 07:23:42 crc kubenswrapper[4856]: E1122 07:23:42.849379 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259ff6f3-7a32-446c-b8d0-799a11191319" containerName="ovn-config" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849403 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="259ff6f3-7a32-446c-b8d0-799a11191319" containerName="ovn-config" Nov 22 07:23:42 crc kubenswrapper[4856]: E1122 07:23:42.849424 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e04b9723-304a-46a5-a230-2daf9bcd6c3c" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849431 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e04b9723-304a-46a5-a230-2daf9bcd6c3c" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: E1122 07:23:42.849448 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0079df7-afe2-44a1-9c44-aabed35e0920" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849456 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0079df7-afe2-44a1-9c44-aabed35e0920" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: E1122 07:23:42.849471 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f061b34a-dff9-42e7-8b22-2cce81c12234" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849477 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f061b34a-dff9-42e7-8b22-2cce81c12234" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: E1122 07:23:42.849489 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cb599bf-1bc1-4497-82a8-2165e566aaa4" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849496 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cb599bf-1bc1-4497-82a8-2165e566aaa4" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: E1122 07:23:42.849526 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5de4ba-e26d-45de-a653-8cc9be68d5c3" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849540 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5de4ba-e26d-45de-a653-8cc9be68d5c3" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: E1122 07:23:42.849558 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5bd7b67-77ce-4a59-a510-f5b39de503d8" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849565 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5bd7b67-77ce-4a59-a510-f5b39de503d8" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849871 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="259ff6f3-7a32-446c-b8d0-799a11191319" containerName="ovn-config" Nov 
22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849904 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cb599bf-1bc1-4497-82a8-2165e566aaa4" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849918 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e04b9723-304a-46a5-a230-2daf9bcd6c3c" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849934 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5de4ba-e26d-45de-a653-8cc9be68d5c3" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849945 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0079df7-afe2-44a1-9c44-aabed35e0920" containerName="mariadb-database-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849959 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f061b34a-dff9-42e7-8b22-2cce81c12234" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.849970 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5bd7b67-77ce-4a59-a510-f5b39de503d8" containerName="mariadb-account-create" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.850763 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.853244 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.853676 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.853849 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.853919 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7mv7p" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.858358 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vqbwk"] Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.929010 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-kvfrn"] Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.929921 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-mzsjk" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.931090 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9-config-mzsjk" event={"ID":"259ff6f3-7a32-446c-b8d0-799a11191319","Type":"ContainerDied","Data":"841fff4b4c546b8352642a5e6b72a580d21b22be8a7ecb9eabbdb55ce012b75c"} Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.931117 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="841fff4b4c546b8352642a5e6b72a580d21b22be8a7ecb9eabbdb55ce012b75c" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.931171 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.935270 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.935728 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5wct7" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.940826 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kvfrn"] Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.957561 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdz4j\" (UniqueName: \"kubernetes.io/projected/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-kube-api-access-pdz4j\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.957614 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-combined-ca-bundle\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:42 crc kubenswrapper[4856]: I1122 07:23:42.957644 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-config-data\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.023257 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hwrb9" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.059310 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxgt4\" (UniqueName: \"kubernetes.io/projected/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-kube-api-access-zxgt4\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.059375 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-combined-ca-bundle\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.059431 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-db-sync-config-data\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.059689 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdz4j\" (UniqueName: \"kubernetes.io/projected/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-kube-api-access-pdz4j\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc 
kubenswrapper[4856]: I1122 07:23:43.059758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-combined-ca-bundle\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.059801 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-config-data\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.059920 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-config-data\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.065226 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-combined-ca-bundle\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.065323 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-config-data\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.082537 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdz4j\" (UniqueName: \"kubernetes.io/projected/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-kube-api-access-pdz4j\") pod \"keystone-db-sync-vqbwk\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.161664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxgt4\" (UniqueName: \"kubernetes.io/projected/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-kube-api-access-zxgt4\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.161734 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-combined-ca-bundle\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.162404 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-db-sync-config-data\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.162808 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-config-data\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.167131 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-config-data\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.167337 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-db-sync-config-data\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.168159 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-combined-ca-bundle\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.168673 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.185806 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxgt4\" (UniqueName: \"kubernetes.io/projected/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-kube-api-access-zxgt4\") pod \"glance-db-sync-kvfrn\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.193968 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.255955 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-kvfrn" Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.362894 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hwrb9-config-mzsjk"] Nov 22 07:23:43 crc kubenswrapper[4856]: I1122 07:23:43.372377 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hwrb9-config-mzsjk"] Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.547619 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hwrb9-config-flktw"] Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.548577 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.552646 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.556361 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hwrb9-config-flktw"] Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.671901 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run-ovn\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.672186 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-scripts\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.672211 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-log-ovn\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.672235 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.672283 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-additional-scripts\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.672318 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdkc9\" (UniqueName: \"kubernetes.io/projected/24ee4794-dea5-460a-8dbe-01bb2b376432-kube-api-access-vdkc9\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.679782 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vqbwk"] Nov 22 07:23:44 crc kubenswrapper[4856]: W1122 07:23:43.686267 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62be1cd5_ba89_49d3_8f57_6ab0bf20848a.slice/crio-3603b102baae23a0aedc6580de7f32b9b81eb3bc7c25fad23a35cdaed4a8619e WatchSource:0}: Error finding container 3603b102baae23a0aedc6580de7f32b9b81eb3bc7c25fad23a35cdaed4a8619e: Status 404 returned error can't find the container with id 
3603b102baae23a0aedc6580de7f32b9b81eb3bc7c25fad23a35cdaed4a8619e Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774106 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdkc9\" (UniqueName: \"kubernetes.io/projected/24ee4794-dea5-460a-8dbe-01bb2b376432-kube-api-access-vdkc9\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774202 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run-ovn\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-scripts\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-log-ovn\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774280 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774310 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-additional-scripts\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774695 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774705 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-log-ovn\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.774712 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run-ovn\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " 
pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.775243 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-additional-scripts\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.776608 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-scripts\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.795732 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdkc9\" (UniqueName: \"kubernetes.io/projected/24ee4794-dea5-460a-8dbe-01bb2b376432-kube-api-access-vdkc9\") pod \"ovn-controller-hwrb9-config-flktw\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.887006 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.965680 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vqbwk" event={"ID":"62be1cd5-ba89-49d3-8f57-6ab0bf20848a","Type":"ContainerStarted","Data":"3603b102baae23a0aedc6580de7f32b9b81eb3bc7c25fad23a35cdaed4a8619e"} Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:43.968121 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"c3a82fe013330aee4b49a20895b7832fbd9f0ff8a51956b8475b88650d0ca91f"} Nov 22 07:23:44 crc kubenswrapper[4856]: I1122 07:23:44.737868 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259ff6f3-7a32-446c-b8d0-799a11191319" path="/var/lib/kubelet/pods/259ff6f3-7a32-446c-b8d0-799a11191319/volumes" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.187784 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-kvfrn"] Nov 22 07:23:45 crc kubenswrapper[4856]: W1122 07:23:45.192953 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10cb606c_6ef8_49e7_9fe4_08dd07fbd0fb.slice/crio-8cbd2b9f59d2d013a91b9f8e46b21cd06fae8e820ede4733356baef05c62dad6 WatchSource:0}: Error finding container 8cbd2b9f59d2d013a91b9f8e46b21cd06fae8e820ede4733356baef05c62dad6: Status 404 returned error can't find the container with id 8cbd2b9f59d2d013a91b9f8e46b21cd06fae8e820ede4733356baef05c62dad6 Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.314333 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hwrb9-config-flktw"] Nov 22 07:23:45 crc kubenswrapper[4856]: W1122 07:23:45.322833 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24ee4794_dea5_460a_8dbe_01bb2b376432.slice/crio-9f374a456298250a902a5d2657d7a0a4492a87c42336bde884bb03f5c65b8b43 WatchSource:0}: Error finding container 
9f374a456298250a902a5d2657d7a0a4492a87c42336bde884bb03f5c65b8b43: Status 404 returned error can't find the container with id 9f374a456298250a902a5d2657d7a0a4492a87c42336bde884bb03f5c65b8b43 Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.774254 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-t4bxw"] Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.779246 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.786815 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-t4bxw"] Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.859749 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-vkfmv"] Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.860857 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.867734 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vkfmv"] Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.924541 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e55d62f-386c-4731-870a-a4909fb100b9-operator-scripts\") pod \"cinder-db-create-t4bxw\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.924598 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sjvs\" (UniqueName: \"kubernetes.io/projected/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-kube-api-access-2sjvs\") pod \"barbican-db-create-vkfmv\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.924636 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-operator-scripts\") pod \"barbican-db-create-vkfmv\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.924673 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vzb\" (UniqueName: \"kubernetes.io/projected/7e55d62f-386c-4731-870a-a4909fb100b9-kube-api-access-x7vzb\") pod \"cinder-db-create-t4bxw\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.964826 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ceda-account-create-p77h2"] Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.966543 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.972875 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 22 07:23:45 crc kubenswrapper[4856]: I1122 07:23:45.974893 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceda-account-create-p77h2"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.000042 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kvfrn" event={"ID":"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb","Type":"ContainerStarted","Data":"8cbd2b9f59d2d013a91b9f8e46b21cd06fae8e820ede4733356baef05c62dad6"} Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.003857 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"199edfe080cf33b200ed5effe88b6a79246b1c89eb804c543da87be52e6c569e"} Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.003904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"507063dad370d0aa753a3a159944ec9f090dd4d59c3360495ed98d90f8250c2e"} Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.003918 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"c22be9584965ebc42abd66c9bfe89aca421bd210a908db30115541e641df706a"} Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.008193 4856 generic.go:334] "Generic (PLEG): container finished" podID="24ee4794-dea5-460a-8dbe-01bb2b376432" containerID="37e2066a7b84c2ef9c3d9d79dc7de5c5f24f7dfb09f03d6781d7007196e58e36" exitCode=0 Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.008240 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9-config-flktw" event={"ID":"24ee4794-dea5-460a-8dbe-01bb2b376432","Type":"ContainerDied","Data":"37e2066a7b84c2ef9c3d9d79dc7de5c5f24f7dfb09f03d6781d7007196e58e36"} Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.008264 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9-config-flktw" event={"ID":"24ee4794-dea5-460a-8dbe-01bb2b376432","Type":"ContainerStarted","Data":"9f374a456298250a902a5d2657d7a0a4492a87c42336bde884bb03f5c65b8b43"} Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.025930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-operator-scripts\") pod \"barbican-db-create-vkfmv\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.026018 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7vzb\" (UniqueName: \"kubernetes.io/projected/7e55d62f-386c-4731-870a-a4909fb100b9-kube-api-access-x7vzb\") pod \"cinder-db-create-t4bxw\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.026050 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckn4t\" (UniqueName: 
\"kubernetes.io/projected/5db71edd-7a64-44d0-abda-ffc266851549-kube-api-access-ckn4t\") pod \"cinder-ceda-account-create-p77h2\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.026191 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db71edd-7a64-44d0-abda-ffc266851549-operator-scripts\") pod \"cinder-ceda-account-create-p77h2\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.026225 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e55d62f-386c-4731-870a-a4909fb100b9-operator-scripts\") pod \"cinder-db-create-t4bxw\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.026253 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sjvs\" (UniqueName: \"kubernetes.io/projected/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-kube-api-access-2sjvs\") pod \"barbican-db-create-vkfmv\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.027987 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-operator-scripts\") pod \"barbican-db-create-vkfmv\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.028861 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e55d62f-386c-4731-870a-a4909fb100b9-operator-scripts\") pod \"cinder-db-create-t4bxw\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.053327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sjvs\" (UniqueName: \"kubernetes.io/projected/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-kube-api-access-2sjvs\") pod \"barbican-db-create-vkfmv\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.054382 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7vzb\" (UniqueName: \"kubernetes.io/projected/7e55d62f-386c-4731-870a-a4909fb100b9-kube-api-access-x7vzb\") pod \"cinder-db-create-t4bxw\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.055750 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-29ba-account-create-xlkjx"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.056890 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.060042 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.069517 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-29ba-account-create-xlkjx"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.132984 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckn4t\" (UniqueName: \"kubernetes.io/projected/5db71edd-7a64-44d0-abda-ffc266851549-kube-api-access-ckn4t\") pod \"cinder-ceda-account-create-p77h2\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.133180 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-operator-scripts\") pod \"barbican-29ba-account-create-xlkjx\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.133353 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkz8j\" (UniqueName: \"kubernetes.io/projected/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-kube-api-access-gkz8j\") pod \"barbican-29ba-account-create-xlkjx\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.133423 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db71edd-7a64-44d0-abda-ffc266851549-operator-scripts\") pod \"cinder-ceda-account-create-p77h2\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.143460 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t4bxw" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.155577 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db71edd-7a64-44d0-abda-ffc266851549-operator-scripts\") pod \"cinder-ceda-account-create-p77h2\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.169110 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-rt7kb"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.170229 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckn4t\" (UniqueName: \"kubernetes.io/projected/5db71edd-7a64-44d0-abda-ffc266851549-kube-api-access-ckn4t\") pod \"cinder-ceda-account-create-p77h2\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.170884 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.178624 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-73f8-account-create-scphz"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.182698 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.185138 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.185211 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rt7kb"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.219056 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-73f8-account-create-scphz"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.243871 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102e4706-2696-459a-88e6-b6cd95733094-operator-scripts\") pod \"neutron-db-create-rt7kb\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.243940 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-operator-scripts\") pod \"barbican-29ba-account-create-xlkjx\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.244043 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkz8j\" (UniqueName: \"kubernetes.io/projected/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-kube-api-access-gkz8j\") pod \"barbican-29ba-account-create-xlkjx\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.244146 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vpjp\" (UniqueName: \"kubernetes.io/projected/102e4706-2696-459a-88e6-b6cd95733094-kube-api-access-7vpjp\") pod \"neutron-db-create-rt7kb\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.266111 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-vkfmv" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.272740 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-operator-scripts\") pod \"barbican-29ba-account-create-xlkjx\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.280111 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkz8j\" (UniqueName: \"kubernetes.io/projected/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-kube-api-access-gkz8j\") pod \"barbican-29ba-account-create-xlkjx\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.301154 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.346555 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vpjp\" (UniqueName: \"kubernetes.io/projected/102e4706-2696-459a-88e6-b6cd95733094-kube-api-access-7vpjp\") pod \"neutron-db-create-rt7kb\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.346956 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e9bfff-515e-462a-9a73-a9514676f9f8-operator-scripts\") pod \"neutron-73f8-account-create-scphz\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.348155 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102e4706-2696-459a-88e6-b6cd95733094-operator-scripts\") pod \"neutron-db-create-rt7kb\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.346997 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102e4706-2696-459a-88e6-b6cd95733094-operator-scripts\") pod \"neutron-db-create-rt7kb\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.348627 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlzst\" (UniqueName: \"kubernetes.io/projected/42e9bfff-515e-462a-9a73-a9514676f9f8-kube-api-access-dlzst\") pod \"neutron-73f8-account-create-scphz\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.374062 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vpjp\" (UniqueName: \"kubernetes.io/projected/102e4706-2696-459a-88e6-b6cd95733094-kube-api-access-7vpjp\") pod \"neutron-db-create-rt7kb\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.377170 4856 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.449714 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlzst\" (UniqueName: \"kubernetes.io/projected/42e9bfff-515e-462a-9a73-a9514676f9f8-kube-api-access-dlzst\") pod \"neutron-73f8-account-create-scphz\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.449841 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e9bfff-515e-462a-9a73-a9514676f9f8-operator-scripts\") pod \"neutron-73f8-account-create-scphz\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.451326 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e9bfff-515e-462a-9a73-a9514676f9f8-operator-scripts\") pod \"neutron-73f8-account-create-scphz\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.470916 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlzst\" (UniqueName: \"kubernetes.io/projected/42e9bfff-515e-462a-9a73-a9514676f9f8-kube-api-access-dlzst\") pod \"neutron-73f8-account-create-scphz\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.663106 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rt7kb" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.667394 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-t4bxw"] Nov 22 07:23:46 crc kubenswrapper[4856]: W1122 07:23:46.674903 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e55d62f_386c_4731_870a_a4909fb100b9.slice/crio-c53af9b234d66e1f8d0f181f807fe67a00ccd01c0ea86de76204325f2f0d5f24 WatchSource:0}: Error finding container c53af9b234d66e1f8d0f181f807fe67a00ccd01c0ea86de76204325f2f0d5f24: Status 404 returned error can't find the container with id c53af9b234d66e1f8d0f181f807fe67a00ccd01c0ea86de76204325f2f0d5f24 Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.677282 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.693847 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ceda-account-create-p77h2"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.721103 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-29ba-account-create-xlkjx"] Nov 22 07:23:46 crc kubenswrapper[4856]: I1122 07:23:46.801461 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-vkfmv"] Nov 22 07:23:47 crc kubenswrapper[4856]: I1122 07:23:47.017575 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t4bxw" event={"ID":"7e55d62f-386c-4731-870a-a4909fb100b9","Type":"ContainerStarted","Data":"c53af9b234d66e1f8d0f181f807fe67a00ccd01c0ea86de76204325f2f0d5f24"} Nov 22 07:23:47 crc kubenswrapper[4856]: I1122 07:23:47.020763 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"a5a09f33961facab4f00ff54e2e02326d023fd20d2ac164e6dacaf7131204425"} Nov 22 07:23:52 crc kubenswrapper[4856]: W1122 07:23:52.402077 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5db71edd_7a64_44d0_abda_ffc266851549.slice/crio-1aa153ea353763e9c24b81ca7ac05b26c08d8ce7447699438e84d6579d22ac31 WatchSource:0}: Error finding container 1aa153ea353763e9c24b81ca7ac05b26c08d8ce7447699438e84d6579d22ac31: Status 404 returned error can't find the container with id 1aa153ea353763e9c24b81ca7ac05b26c08d8ce7447699438e84d6579d22ac31 Nov 22 07:23:52 crc kubenswrapper[4856]: W1122 07:23:52.407317 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cb1e06c_d7a8_4456_8614_d71e182d6ad2.slice/crio-9018e1deb18c2b794dde1449262ec622a9c22b6af7d8e3264660f8f9ba29a944 WatchSource:0}: Error finding container 9018e1deb18c2b794dde1449262ec622a9c22b6af7d8e3264660f8f9ba29a944: Status 404 returned error can't find the container with id 9018e1deb18c2b794dde1449262ec622a9c22b6af7d8e3264660f8f9ba29a944 Nov 22 07:23:52 crc kubenswrapper[4856]: W1122 07:23:52.408425 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1ae4cc7_5c62_4d6d_a578_ed26f892a159.slice/crio-80372e7ab58a33a03bbcefc0679609a52262ce011de55c9ff65a07043f869ec3 WatchSource:0}: Error finding container 80372e7ab58a33a03bbcefc0679609a52262ce011de55c9ff65a07043f869ec3: Status 404 returned error can't find the container with id 80372e7ab58a33a03bbcefc0679609a52262ce011de55c9ff65a07043f869ec3 Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.497840 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.665709 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-additional-scripts\") pod \"24ee4794-dea5-460a-8dbe-01bb2b376432\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.665796 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run\") pod \"24ee4794-dea5-460a-8dbe-01bb2b376432\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.665841 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run-ovn\") pod \"24ee4794-dea5-460a-8dbe-01bb2b376432\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.665916 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-log-ovn\") pod \"24ee4794-dea5-460a-8dbe-01bb2b376432\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.665936 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-scripts\") pod \"24ee4794-dea5-460a-8dbe-01bb2b376432\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.666003 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdkc9\" (UniqueName: \"kubernetes.io/projected/24ee4794-dea5-460a-8dbe-01bb2b376432-kube-api-access-vdkc9\") pod \"24ee4794-dea5-460a-8dbe-01bb2b376432\" (UID: \"24ee4794-dea5-460a-8dbe-01bb2b376432\") " Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.666268 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run" (OuterVolumeSpecName: "var-run") pod "24ee4794-dea5-460a-8dbe-01bb2b376432" (UID: "24ee4794-dea5-460a-8dbe-01bb2b376432"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.666270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "24ee4794-dea5-460a-8dbe-01bb2b376432" (UID: "24ee4794-dea5-460a-8dbe-01bb2b376432"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.666339 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "24ee4794-dea5-460a-8dbe-01bb2b376432" (UID: "24ee4794-dea5-460a-8dbe-01bb2b376432"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.667263 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-scripts" (OuterVolumeSpecName: "scripts") pod "24ee4794-dea5-460a-8dbe-01bb2b376432" (UID: "24ee4794-dea5-460a-8dbe-01bb2b376432"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.667995 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "24ee4794-dea5-460a-8dbe-01bb2b376432" (UID: "24ee4794-dea5-460a-8dbe-01bb2b376432"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.699896 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ee4794-dea5-460a-8dbe-01bb2b376432-kube-api-access-vdkc9" (OuterVolumeSpecName: "kube-api-access-vdkc9") pod "24ee4794-dea5-460a-8dbe-01bb2b376432" (UID: "24ee4794-dea5-460a-8dbe-01bb2b376432"). InnerVolumeSpecName "kube-api-access-vdkc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.768197 4856 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.768239 4856 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.768250 4856 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.768260 4856 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/24ee4794-dea5-460a-8dbe-01bb2b376432-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.768269 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24ee4794-dea5-460a-8dbe-01bb2b376432-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.768280 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdkc9\" (UniqueName: \"kubernetes.io/projected/24ee4794-dea5-460a-8dbe-01bb2b376432-kube-api-access-vdkc9\") on node \"crc\" DevicePath \"\"" Nov 22 07:23:52 crc kubenswrapper[4856]: I1122 07:23:52.840675 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-rt7kb"] Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.071562 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9-config-flktw" event={"ID":"24ee4794-dea5-460a-8dbe-01bb2b376432","Type":"ContainerDied","Data":"9f374a456298250a902a5d2657d7a0a4492a87c42336bde884bb03f5c65b8b43"} Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.071926 4856 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f374a456298250a902a5d2657d7a0a4492a87c42336bde884bb03f5c65b8b43" Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.071585 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hwrb9-config-flktw" Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.074484 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ba-account-create-xlkjx" event={"ID":"1cb1e06c-d7a8-4456-8614-d71e182d6ad2","Type":"ContainerStarted","Data":"9018e1deb18c2b794dde1449262ec622a9c22b6af7d8e3264660f8f9ba29a944"} Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.076768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceda-account-create-p77h2" event={"ID":"5db71edd-7a64-44d0-abda-ffc266851549","Type":"ContainerStarted","Data":"1aa153ea353763e9c24b81ca7ac05b26c08d8ce7447699438e84d6579d22ac31"} Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.077721 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vkfmv" event={"ID":"a1ae4cc7-5c62-4d6d-a578-ed26f892a159","Type":"ContainerStarted","Data":"80372e7ab58a33a03bbcefc0679609a52262ce011de55c9ff65a07043f869ec3"} Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.575782 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hwrb9-config-flktw"] Nov 22 07:23:53 crc kubenswrapper[4856]: I1122 07:23:53.584899 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hwrb9-config-flktw"] Nov 22 07:23:54 crc kubenswrapper[4856]: I1122 07:23:54.727620 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ee4794-dea5-460a-8dbe-01bb2b376432" path="/var/lib/kubelet/pods/24ee4794-dea5-460a-8dbe-01bb2b376432/volumes" Nov 22 07:24:06 crc kubenswrapper[4856]: E1122 07:24:06.318120 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29" Nov 22 07:24:06 crc kubenswrapper[4856]: E1122 07:24:06.318884 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zxgt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-kvfrn_openstack(10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:24:06 crc kubenswrapper[4856]: E1122 07:24:06.320387 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-kvfrn" podUID="10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 07:24:07.163975 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-73f8-account-create-scphz"] Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 07:24:07.192186 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rt7kb" event={"ID":"102e4706-2696-459a-88e6-b6cd95733094","Type":"ContainerStarted","Data":"11a71e8740478b004ee05d76aa8a991089a3d76f2881c32cb397fd639092603e"} Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 07:24:07.194044 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vkfmv" event={"ID":"a1ae4cc7-5c62-4d6d-a578-ed26f892a159","Type":"ContainerStarted","Data":"6525b4e2de9799c74ff23a66dac29f1d95107568c6850b31be0fdcb315d454e7"} Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 07:24:07.197736 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t4bxw" event={"ID":"7e55d62f-386c-4731-870a-a4909fb100b9","Type":"ContainerStarted","Data":"85cf79e96fc13c34ea3abd9d2877f21dd93203c7de56aaa2097e3d1a062e3a4e"} Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 
07:24:07.200571 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ba-account-create-xlkjx" event={"ID":"1cb1e06c-d7a8-4456-8614-d71e182d6ad2","Type":"ContainerStarted","Data":"a214e899de1fa12d232a5f7ae7432c6684e5ff6c933f40705502235cb59cf8ba"} Nov 22 07:24:07 crc kubenswrapper[4856]: E1122 07:24:07.204245 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:26bd7b0bd6070856aefef6fe754c547d55c056396ea30d879d34c2d49b5a1d29\\\"\"" pod="openstack/glance-db-sync-kvfrn" podUID="10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 07:24:07.222615 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-vkfmv" podStartSLOduration=22.222595911 podStartE2EDuration="22.222595911s" podCreationTimestamp="2025-11-22 07:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:07.215395654 +0000 UTC m=+1289.628788912" watchObservedRunningTime="2025-11-22 07:24:07.222595911 +0000 UTC m=+1289.635989169" Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 07:24:07.268400 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-t4bxw" podStartSLOduration=22.268381216 podStartE2EDuration="22.268381216s" podCreationTimestamp="2025-11-22 07:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:07.253834889 +0000 UTC m=+1289.667228147" watchObservedRunningTime="2025-11-22 07:24:07.268381216 +0000 UTC m=+1289.681774464" Nov 22 07:24:07 crc kubenswrapper[4856]: I1122 07:24:07.268473 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-29ba-account-create-xlkjx" podStartSLOduration=21.268470159 podStartE2EDuration="21.268470159s" podCreationTimestamp="2025-11-22 07:23:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:07.268363626 +0000 UTC m=+1289.681756884" watchObservedRunningTime="2025-11-22 07:24:07.268470159 +0000 UTC m=+1289.681863417" Nov 22 07:24:07 crc kubenswrapper[4856]: E1122 07:24:07.658561 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75" Nov 22 07:24:07 crc kubenswrapper[4856]: E1122 07:24:07.659144 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-server,Image:quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75,Command:[/usr/bin/swift-container-server /etc/swift/container-server.conf.d 
-v],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:container,HostPort:0,ContainerPort:6201,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b7h56h9dh94h67bh697h95h55hbh555h556h675h5fdh57dh579h5fbh64fh5c9h687hb6h678h5d4h549h54h98h8ch564h5bh5bch55dhc8hf8q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:swift,ReadOnly:false,MountPath:/srv/node/pv,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cache,ReadOnly:false,MountPath:/var/cache/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lock,ReadOnly:false,MountPath:/var/lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rq8fr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-storage-0_openstack(8b649794-30ba-493c-9285-05a58981ed36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.210466 4856 generic.go:334] "Generic (PLEG): container finished" podID="102e4706-2696-459a-88e6-b6cd95733094" containerID="bea09664f4a7eb9a8d241c32f7456d5ae5ff024cd6a89a93dc01db73a7452dd2" exitCode=0 Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.210536 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rt7kb" event={"ID":"102e4706-2696-459a-88e6-b6cd95733094","Type":"ContainerDied","Data":"bea09664f4a7eb9a8d241c32f7456d5ae5ff024cd6a89a93dc01db73a7452dd2"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.211761 4856 generic.go:334] "Generic (PLEG): container finished" podID="5db71edd-7a64-44d0-abda-ffc266851549" containerID="6e1eddbe04b1be2ec22be58a8fe8aa1417daa37d9b0e151e04b44582e14fb8d3" exitCode=0 Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.211807 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceda-account-create-p77h2" event={"ID":"5db71edd-7a64-44d0-abda-ffc266851549","Type":"ContainerDied","Data":"6e1eddbe04b1be2ec22be58a8fe8aa1417daa37d9b0e151e04b44582e14fb8d3"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.213425 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vqbwk" event={"ID":"62be1cd5-ba89-49d3-8f57-6ab0bf20848a","Type":"ContainerStarted","Data":"c86e59d45b1d7c3c0e2462f84ad716038842e4262fc6e161703b245f174c63d7"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.215102 4856 generic.go:334] "Generic (PLEG): container finished" podID="a1ae4cc7-5c62-4d6d-a578-ed26f892a159" 
containerID="6525b4e2de9799c74ff23a66dac29f1d95107568c6850b31be0fdcb315d454e7" exitCode=0 Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.215252 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vkfmv" event={"ID":"a1ae4cc7-5c62-4d6d-a578-ed26f892a159","Type":"ContainerDied","Data":"6525b4e2de9799c74ff23a66dac29f1d95107568c6850b31be0fdcb315d454e7"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.223578 4856 generic.go:334] "Generic (PLEG): container finished" podID="42e9bfff-515e-462a-9a73-a9514676f9f8" containerID="d90855eebce3d108812258ad5edea4fee4c4190885d76c83348fd0a3eef22ab3" exitCode=0 Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.223636 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-73f8-account-create-scphz" event={"ID":"42e9bfff-515e-462a-9a73-a9514676f9f8","Type":"ContainerDied","Data":"d90855eebce3d108812258ad5edea4fee4c4190885d76c83348fd0a3eef22ab3"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.223705 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-73f8-account-create-scphz" event={"ID":"42e9bfff-515e-462a-9a73-a9514676f9f8","Type":"ContainerStarted","Data":"62cc3f6b08cf8e9edb9f4ee839c69a7f9c901a1481eb68a5fdafa00bc977562a"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.227794 4856 generic.go:334] "Generic (PLEG): container finished" podID="7e55d62f-386c-4731-870a-a4909fb100b9" containerID="85cf79e96fc13c34ea3abd9d2877f21dd93203c7de56aaa2097e3d1a062e3a4e" exitCode=0 Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.227878 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t4bxw" event={"ID":"7e55d62f-386c-4731-870a-a4909fb100b9","Type":"ContainerDied","Data":"85cf79e96fc13c34ea3abd9d2877f21dd93203c7de56aaa2097e3d1a062e3a4e"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.235217 4856 generic.go:334] "Generic (PLEG): container finished" podID="1cb1e06c-d7a8-4456-8614-d71e182d6ad2" containerID="a214e899de1fa12d232a5f7ae7432c6684e5ff6c933f40705502235cb59cf8ba" exitCode=0 Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.235297 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ba-account-create-xlkjx" event={"ID":"1cb1e06c-d7a8-4456-8614-d71e182d6ad2","Type":"ContainerDied","Data":"a214e899de1fa12d232a5f7ae7432c6684e5ff6c933f40705502235cb59cf8ba"} Nov 22 07:24:08 crc kubenswrapper[4856]: I1122 07:24:08.313770 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vqbwk" podStartSLOduration=3.224192579 podStartE2EDuration="26.313714816s" podCreationTimestamp="2025-11-22 07:23:42 +0000 UTC" firstStartedPulling="2025-11-22 07:23:43.690407562 +0000 UTC m=+1266.103800820" lastFinishedPulling="2025-11-22 07:24:06.779929799 +0000 UTC m=+1289.193323057" observedRunningTime="2025-11-22 07:24:08.308108305 +0000 UTC m=+1290.721501563" watchObservedRunningTime="2025-11-22 07:24:08.313714816 +0000 UTC m=+1290.727108094" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.626562 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-rt7kb" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.674775 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102e4706-2696-459a-88e6-b6cd95733094-operator-scripts\") pod \"102e4706-2696-459a-88e6-b6cd95733094\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.675315 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/102e4706-2696-459a-88e6-b6cd95733094-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "102e4706-2696-459a-88e6-b6cd95733094" (UID: "102e4706-2696-459a-88e6-b6cd95733094"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.675352 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vpjp\" (UniqueName: \"kubernetes.io/projected/102e4706-2696-459a-88e6-b6cd95733094-kube-api-access-7vpjp\") pod \"102e4706-2696-459a-88e6-b6cd95733094\" (UID: \"102e4706-2696-459a-88e6-b6cd95733094\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.675886 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/102e4706-2696-459a-88e6-b6cd95733094-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.682985 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102e4706-2696-459a-88e6-b6cd95733094-kube-api-access-7vpjp" (OuterVolumeSpecName: "kube-api-access-7vpjp") pod "102e4706-2696-459a-88e6-b6cd95733094" (UID: "102e4706-2696-459a-88e6-b6cd95733094"). InnerVolumeSpecName "kube-api-access-7vpjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.783614 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vpjp\" (UniqueName: \"kubernetes.io/projected/102e4706-2696-459a-88e6-b6cd95733094-kube-api-access-7vpjp\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.787690 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.792256 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.801256 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-t4bxw" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.804943 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.886779 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlzst\" (UniqueName: \"kubernetes.io/projected/42e9bfff-515e-462a-9a73-a9514676f9f8-kube-api-access-dlzst\") pod \"42e9bfff-515e-462a-9a73-a9514676f9f8\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.886923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db71edd-7a64-44d0-abda-ffc266851549-operator-scripts\") pod \"5db71edd-7a64-44d0-abda-ffc266851549\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.886957 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckn4t\" (UniqueName: \"kubernetes.io/projected/5db71edd-7a64-44d0-abda-ffc266851549-kube-api-access-ckn4t\") pod \"5db71edd-7a64-44d0-abda-ffc266851549\" (UID: \"5db71edd-7a64-44d0-abda-ffc266851549\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.886989 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-operator-scripts\") pod \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.887032 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e55d62f-386c-4731-870a-a4909fb100b9-operator-scripts\") pod \"7e55d62f-386c-4731-870a-a4909fb100b9\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.887072 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkz8j\" (UniqueName: \"kubernetes.io/projected/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-kube-api-access-gkz8j\") pod \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\" (UID: \"1cb1e06c-d7a8-4456-8614-d71e182d6ad2\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.887126 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7vzb\" (UniqueName: \"kubernetes.io/projected/7e55d62f-386c-4731-870a-a4909fb100b9-kube-api-access-x7vzb\") pod \"7e55d62f-386c-4731-870a-a4909fb100b9\" (UID: \"7e55d62f-386c-4731-870a-a4909fb100b9\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.887148 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e9bfff-515e-462a-9a73-a9514676f9f8-operator-scripts\") pod \"42e9bfff-515e-462a-9a73-a9514676f9f8\" (UID: \"42e9bfff-515e-462a-9a73-a9514676f9f8\") " Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.892985 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5db71edd-7a64-44d0-abda-ffc266851549-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5db71edd-7a64-44d0-abda-ffc266851549" (UID: "5db71edd-7a64-44d0-abda-ffc266851549"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.897008 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1cb1e06c-d7a8-4456-8614-d71e182d6ad2" (UID: "1cb1e06c-d7a8-4456-8614-d71e182d6ad2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.901715 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e55d62f-386c-4731-870a-a4909fb100b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7e55d62f-386c-4731-870a-a4909fb100b9" (UID: "7e55d62f-386c-4731-870a-a4909fb100b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.901835 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42e9bfff-515e-462a-9a73-a9514676f9f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42e9bfff-515e-462a-9a73-a9514676f9f8" (UID: "42e9bfff-515e-462a-9a73-a9514676f9f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.915026 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db71edd-7a64-44d0-abda-ffc266851549-kube-api-access-ckn4t" (OuterVolumeSpecName: "kube-api-access-ckn4t") pod "5db71edd-7a64-44d0-abda-ffc266851549" (UID: "5db71edd-7a64-44d0-abda-ffc266851549"). InnerVolumeSpecName "kube-api-access-ckn4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.967842 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e9bfff-515e-462a-9a73-a9514676f9f8-kube-api-access-dlzst" (OuterVolumeSpecName: "kube-api-access-dlzst") pod "42e9bfff-515e-462a-9a73-a9514676f9f8" (UID: "42e9bfff-515e-462a-9a73-a9514676f9f8"). InnerVolumeSpecName "kube-api-access-dlzst". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.987739 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-kube-api-access-gkz8j" (OuterVolumeSpecName: "kube-api-access-gkz8j") pod "1cb1e06c-d7a8-4456-8614-d71e182d6ad2" (UID: "1cb1e06c-d7a8-4456-8614-d71e182d6ad2"). InnerVolumeSpecName "kube-api-access-gkz8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.987866 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e55d62f-386c-4731-870a-a4909fb100b9-kube-api-access-x7vzb" (OuterVolumeSpecName: "kube-api-access-x7vzb") pod "7e55d62f-386c-4731-870a-a4909fb100b9" (UID: "7e55d62f-386c-4731-870a-a4909fb100b9"). InnerVolumeSpecName "kube-api-access-x7vzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992824 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkz8j\" (UniqueName: \"kubernetes.io/projected/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-kube-api-access-gkz8j\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992855 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7vzb\" (UniqueName: \"kubernetes.io/projected/7e55d62f-386c-4731-870a-a4909fb100b9-kube-api-access-x7vzb\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992867 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42e9bfff-515e-462a-9a73-a9514676f9f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992875 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlzst\" (UniqueName: \"kubernetes.io/projected/42e9bfff-515e-462a-9a73-a9514676f9f8-kube-api-access-dlzst\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992883 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5db71edd-7a64-44d0-abda-ffc266851549-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992892 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckn4t\" (UniqueName: \"kubernetes.io/projected/5db71edd-7a64-44d0-abda-ffc266851549-kube-api-access-ckn4t\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992900 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cb1e06c-d7a8-4456-8614-d71e182d6ad2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:09 crc kubenswrapper[4856]: I1122 07:24:09.992908 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7e55d62f-386c-4731-870a-a4909fb100b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.048359 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vkfmv" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.093791 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sjvs\" (UniqueName: \"kubernetes.io/projected/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-kube-api-access-2sjvs\") pod \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.093981 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-operator-scripts\") pod \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\" (UID: \"a1ae4cc7-5c62-4d6d-a578-ed26f892a159\") " Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.094620 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a1ae4cc7-5c62-4d6d-a578-ed26f892a159" (UID: "a1ae4cc7-5c62-4d6d-a578-ed26f892a159"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.101748 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-kube-api-access-2sjvs" (OuterVolumeSpecName: "kube-api-access-2sjvs") pod "a1ae4cc7-5c62-4d6d-a578-ed26f892a159" (UID: "a1ae4cc7-5c62-4d6d-a578-ed26f892a159"). InnerVolumeSpecName "kube-api-access-2sjvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.196213 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.196239 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sjvs\" (UniqueName: \"kubernetes.io/projected/a1ae4cc7-5c62-4d6d-a578-ed26f892a159-kube-api-access-2sjvs\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.251640 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-vkfmv" event={"ID":"a1ae4cc7-5c62-4d6d-a578-ed26f892a159","Type":"ContainerDied","Data":"80372e7ab58a33a03bbcefc0679609a52262ce011de55c9ff65a07043f869ec3"} Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.251722 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80372e7ab58a33a03bbcefc0679609a52262ce011de55c9ff65a07043f869ec3" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.251758 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-vkfmv" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.253901 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-73f8-account-create-scphz" event={"ID":"42e9bfff-515e-462a-9a73-a9514676f9f8","Type":"ContainerDied","Data":"62cc3f6b08cf8e9edb9f4ee839c69a7f9c901a1481eb68a5fdafa00bc977562a"} Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.253926 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-73f8-account-create-scphz" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.253944 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62cc3f6b08cf8e9edb9f4ee839c69a7f9c901a1481eb68a5fdafa00bc977562a" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.255566 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-t4bxw" event={"ID":"7e55d62f-386c-4731-870a-a4909fb100b9","Type":"ContainerDied","Data":"c53af9b234d66e1f8d0f181f807fe67a00ccd01c0ea86de76204325f2f0d5f24"} Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.255593 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c53af9b234d66e1f8d0f181f807fe67a00ccd01c0ea86de76204325f2f0d5f24" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.255662 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-t4bxw" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.259413 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-29ba-account-create-xlkjx" event={"ID":"1cb1e06c-d7a8-4456-8614-d71e182d6ad2","Type":"ContainerDied","Data":"9018e1deb18c2b794dde1449262ec622a9c22b6af7d8e3264660f8f9ba29a944"} Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.259452 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9018e1deb18c2b794dde1449262ec622a9c22b6af7d8e3264660f8f9ba29a944" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.259501 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-29ba-account-create-xlkjx" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.266116 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-rt7kb" event={"ID":"102e4706-2696-459a-88e6-b6cd95733094","Type":"ContainerDied","Data":"11a71e8740478b004ee05d76aa8a991089a3d76f2881c32cb397fd639092603e"} Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.266164 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11a71e8740478b004ee05d76aa8a991089a3d76f2881c32cb397fd639092603e" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.266215 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-rt7kb" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.271631 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ceda-account-create-p77h2" event={"ID":"5db71edd-7a64-44d0-abda-ffc266851549","Type":"ContainerDied","Data":"1aa153ea353763e9c24b81ca7ac05b26c08d8ce7447699438e84d6579d22ac31"} Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.271685 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa153ea353763e9c24b81ca7ac05b26c08d8ce7447699438e84d6579d22ac31" Nov 22 07:24:10 crc kubenswrapper[4856]: I1122 07:24:10.271755 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ceda-account-create-p77h2" Nov 22 07:24:11 crc kubenswrapper[4856]: I1122 07:24:11.281832 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"7739725925a289b294a1260a2963889a83f70dbfee02df9ebc4a046996eec165"} Nov 22 07:24:11 crc kubenswrapper[4856]: I1122 07:24:11.282434 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"5aab3b9349e7624b4bdd58b9ddc145142c8697523405f28d16e4f3c04ea145ae"} Nov 22 07:24:12 crc kubenswrapper[4856]: I1122 07:24:12.292702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"4011c89f0b6803e45417d4182117f87df790db47e51c6dc417714bdbab0d9328"} Nov 22 07:24:12 crc kubenswrapper[4856]: I1122 07:24:12.292743 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"be283db24da6932b997e62df069e78ce522bed9042d62990be78c405a0d8baff"} Nov 22 07:24:15 crc kubenswrapper[4856]: E1122 07:24:15.250453 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"container-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\"]" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" Nov 22 07:24:15 crc kubenswrapper[4856]: I1122 07:24:15.323541 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"f6b36d1ad73481da60eada98f0cdb3c61e2e68ee475247d1ff9682f6f708afb3"} Nov 22 07:24:15 crc kubenswrapper[4856]: I1122 07:24:15.323590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"ecc44836c8466c6fbcc848350b1a769fe7507c5c9ee03a0001c9685bf0cd78bc"} Nov 22 07:24:15 crc kubenswrapper[4856]: I1122 07:24:15.323599 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"9f435952eb044c7ab5dcb833fc12c8685ca6e3fd82a9405acc66ff7e0a5e1488"} Nov 22 07:24:15 crc kubenswrapper[4856]: E1122 07:24:15.330637 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-server\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\"]" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" Nov 22 07:24:16 crc kubenswrapper[4856]: E1122 07:24:16.337792 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"container-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-replicator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-auditor\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\", failed to \"StartContainer\" for \"container-updater\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-swift-container@sha256:273fe8c27d08d0f62773a02f8cef6a761a7768116ee1a4be611f93bbf63f2b75\\\"\"]" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" Nov 22 07:24:21 crc kubenswrapper[4856]: I1122 07:24:21.369753 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kvfrn" event={"ID":"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb","Type":"ContainerStarted","Data":"1d30ff58d06db234c6dbea2039224f88949d93b4b3b10ffc82e7d58969c01365"} Nov 22 07:24:21 crc kubenswrapper[4856]: I1122 07:24:21.377694 4856 generic.go:334] "Generic (PLEG): container finished" podID="62be1cd5-ba89-49d3-8f57-6ab0bf20848a" containerID="c86e59d45b1d7c3c0e2462f84ad716038842e4262fc6e161703b245f174c63d7" exitCode=0 Nov 22 07:24:21 crc kubenswrapper[4856]: I1122 07:24:21.377740 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vqbwk" event={"ID":"62be1cd5-ba89-49d3-8f57-6ab0bf20848a","Type":"ContainerDied","Data":"c86e59d45b1d7c3c0e2462f84ad716038842e4262fc6e161703b245f174c63d7"} Nov 22 07:24:21 crc kubenswrapper[4856]: I1122 07:24:21.410293 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-kvfrn" podStartSLOduration=4.436124751 podStartE2EDuration="39.4102238s" podCreationTimestamp="2025-11-22 07:23:42 +0000 UTC" firstStartedPulling="2025-11-22 07:23:45.195311789 +0000 UTC m=+1267.608705047" lastFinishedPulling="2025-11-22 07:24:20.169410838 +0000 UTC m=+1302.582804096" observedRunningTime="2025-11-22 07:24:21.391555304 +0000 UTC m=+1303.804948572" 
watchObservedRunningTime="2025-11-22 07:24:21.4102238 +0000 UTC m=+1303.823617088" Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.678645 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.816149 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-combined-ca-bundle\") pod \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.816266 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-config-data\") pod \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.816344 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdz4j\" (UniqueName: \"kubernetes.io/projected/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-kube-api-access-pdz4j\") pod \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\" (UID: \"62be1cd5-ba89-49d3-8f57-6ab0bf20848a\") " Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.822940 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-kube-api-access-pdz4j" (OuterVolumeSpecName: "kube-api-access-pdz4j") pod "62be1cd5-ba89-49d3-8f57-6ab0bf20848a" (UID: "62be1cd5-ba89-49d3-8f57-6ab0bf20848a"). InnerVolumeSpecName "kube-api-access-pdz4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.850525 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62be1cd5-ba89-49d3-8f57-6ab0bf20848a" (UID: "62be1cd5-ba89-49d3-8f57-6ab0bf20848a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.866769 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-config-data" (OuterVolumeSpecName: "config-data") pod "62be1cd5-ba89-49d3-8f57-6ab0bf20848a" (UID: "62be1cd5-ba89-49d3-8f57-6ab0bf20848a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.918323 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.918816 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdz4j\" (UniqueName: \"kubernetes.io/projected/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-kube-api-access-pdz4j\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:22 crc kubenswrapper[4856]: I1122 07:24:22.918832 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62be1cd5-ba89-49d3-8f57-6ab0bf20848a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.397670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vqbwk" event={"ID":"62be1cd5-ba89-49d3-8f57-6ab0bf20848a","Type":"ContainerDied","Data":"3603b102baae23a0aedc6580de7f32b9b81eb3bc7c25fad23a35cdaed4a8619e"} Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.397718 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3603b102baae23a0aedc6580de7f32b9b81eb3bc7c25fad23a35cdaed4a8619e" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.397761 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vqbwk" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673114 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c9f7b7b67-27k8v"] Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673455 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42e9bfff-515e-462a-9a73-a9514676f9f8" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673468 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e9bfff-515e-462a-9a73-a9514676f9f8" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673478 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62be1cd5-ba89-49d3-8f57-6ab0bf20848a" containerName="keystone-db-sync" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673485 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="62be1cd5-ba89-49d3-8f57-6ab0bf20848a" containerName="keystone-db-sync" Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673493 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102e4706-2696-459a-88e6-b6cd95733094" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673499 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="102e4706-2696-459a-88e6-b6cd95733094" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673523 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db71edd-7a64-44d0-abda-ffc266851549" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673529 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db71edd-7a64-44d0-abda-ffc266851549" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673544 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ee4794-dea5-460a-8dbe-01bb2b376432" 
containerName="ovn-config" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673551 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ee4794-dea5-460a-8dbe-01bb2b376432" containerName="ovn-config" Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673561 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb1e06c-d7a8-4456-8614-d71e182d6ad2" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673566 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb1e06c-d7a8-4456-8614-d71e182d6ad2" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673576 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1ae4cc7-5c62-4d6d-a578-ed26f892a159" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673581 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1ae4cc7-5c62-4d6d-a578-ed26f892a159" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: E1122 07:24:23.673600 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e55d62f-386c-4731-870a-a4909fb100b9" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673607 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e55d62f-386c-4731-870a-a4909fb100b9" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673754 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="102e4706-2696-459a-88e6-b6cd95733094" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673767 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e9bfff-515e-462a-9a73-a9514676f9f8" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673782 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ee4794-dea5-460a-8dbe-01bb2b376432" containerName="ovn-config" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673792 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1ae4cc7-5c62-4d6d-a578-ed26f892a159" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673803 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb1e06c-d7a8-4456-8614-d71e182d6ad2" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673814 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e55d62f-386c-4731-870a-a4909fb100b9" containerName="mariadb-database-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673826 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db71edd-7a64-44d0-abda-ffc266851549" containerName="mariadb-account-create" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.673836 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="62be1cd5-ba89-49d3-8f57-6ab0bf20848a" containerName="keystone-db-sync" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.674685 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.699849 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-jthw8"] Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.701738 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.705974 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.706133 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.706153 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.717670 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.717675 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7mv7p" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.733550 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-nb\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.733652 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-config\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.733692 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-dns-svc\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.733767 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-sb\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.733800 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbnt6\" (UniqueName: \"kubernetes.io/projected/8a92f1c3-e6ff-433b-8892-cab5bc99555a-kube-api-access-pbnt6\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.738350 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9f7b7b67-27k8v"] Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.752403 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jthw8"] 
Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.835213 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-scripts\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836238 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-sb\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836301 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbnt6\" (UniqueName: \"kubernetes.io/projected/8a92f1c3-e6ff-433b-8892-cab5bc99555a-kube-api-access-pbnt6\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836363 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-combined-ca-bundle\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836432 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-nb\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836491 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8j4b\" (UniqueName: \"kubernetes.io/projected/d3f8176b-8158-442b-b2fe-b34810ecef99-kube-api-access-r8j4b\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-config\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836806 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-dns-svc\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836853 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-fernet-keys\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: 
I1122 07:24:23.836907 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-config-data\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.836961 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-credential-keys\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.837962 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-nb\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.839386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-dns-svc\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.839740 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-config\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.840056 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-sb\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.894922 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.899320 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbnt6\" (UniqueName: \"kubernetes.io/projected/8a92f1c3-e6ff-433b-8892-cab5bc99555a-kube-api-access-pbnt6\") pod \"dnsmasq-dns-c9f7b7b67-27k8v\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.901918 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.908997 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.917647 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.920962 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-n9nhw"] Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.922242 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.924113 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.924753 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.925844 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-khp4b" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.938566 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-scripts\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.938651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-combined-ca-bundle\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.938788 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8j4b\" (UniqueName: \"kubernetes.io/projected/d3f8176b-8158-442b-b2fe-b34810ecef99-kube-api-access-r8j4b\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.938967 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-fernet-keys\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.939003 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-config-data\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.939041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-credential-keys\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.950004 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-scripts\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.950270 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-credential-keys\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" 
Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.950615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-combined-ca-bundle\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.950700 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-fernet-keys\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.952633 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-config-data\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.959845 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-w7rvq"] Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.961123 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.965663 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d46d7" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.965890 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.966487 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.971295 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8j4b\" (UniqueName: \"kubernetes.io/projected/d3f8176b-8158-442b-b2fe-b34810ecef99-kube-api-access-r8j4b\") pod \"keystone-bootstrap-jthw8\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.976202 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:24:23 crc kubenswrapper[4856]: I1122 07:24:23.995064 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.009251 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-w7rvq"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.022606 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n9nhw"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.033536 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040203 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040247 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-combined-ca-bundle\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040278 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-log-httpd\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040297 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-etc-machine-id\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040335 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjc85\" (UniqueName: \"kubernetes.io/projected/3e60644a-2d82-40ed-9d0b-bb144837842a-kube-api-access-jjc85\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040361 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-combined-ca-bundle\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040383 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-config-data\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040404 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-scripts\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040420 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxwc\" (UniqueName: \"kubernetes.io/projected/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-kube-api-access-fbxwc\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc 
kubenswrapper[4856]: I1122 07:24:24.040436 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-scripts\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040471 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-config-data\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040492 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44lk\" (UniqueName: \"kubernetes.io/projected/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-kube-api-access-h44lk\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040522 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-db-sync-config-data\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040579 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-config\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.040603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-run-httpd\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.140724 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-298l7"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146470 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146502 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-config\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 
crc kubenswrapper[4856]: I1122 07:24:24.146547 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-run-httpd\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146569 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146597 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-combined-ca-bundle\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146622 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-log-httpd\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-etc-machine-id\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146653 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjc85\" (UniqueName: \"kubernetes.io/projected/3e60644a-2d82-40ed-9d0b-bb144837842a-kube-api-access-jjc85\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-combined-ca-bundle\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146687 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-config-data\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146711 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-scripts\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146726 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxwc\" (UniqueName: \"kubernetes.io/projected/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-kube-api-access-fbxwc\") pod \"neutron-db-sync-w7rvq\" (UID: 
\"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-scripts\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146776 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-config-data\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146798 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44lk\" (UniqueName: \"kubernetes.io/projected/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-kube-api-access-h44lk\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.146817 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-db-sync-config-data\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.148379 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.156303 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-combined-ca-bundle\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.156695 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-log-httpd\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.156745 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-etc-machine-id\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.157371 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.157839 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qslwv" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.158103 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 07:24:24 crc 
kubenswrapper[4856]: I1122 07:24:24.168688 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-298l7"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.183012 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.185556 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-config-data\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.186440 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-run-httpd\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.188240 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-combined-ca-bundle\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.192409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-db-sync-config-data\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.192857 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-config\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.193052 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-config-data\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.193697 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-scripts\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.197349 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxwc\" (UniqueName: \"kubernetes.io/projected/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-kube-api-access-fbxwc\") pod \"neutron-db-sync-w7rvq\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.197367 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-scripts\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.198114 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjc85\" (UniqueName: \"kubernetes.io/projected/3e60644a-2d82-40ed-9d0b-bb144837842a-kube-api-access-jjc85\") pod \"ceilometer-0\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.204031 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h44lk\" (UniqueName: \"kubernetes.io/projected/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-kube-api-access-h44lk\") pod \"cinder-db-sync-n9nhw\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.204057 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-ckxn9"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.218110 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9f7b7b67-27k8v"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.218209 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.221260 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.221267 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ckxn9"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.221383 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2mnmh" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.221439 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.228596 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-794df4974f-bqzxn"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.230064 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.231419 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.243877 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794df4974f-bqzxn"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.243902 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.248425 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-combined-ca-bundle\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.248490 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chzvc\" (UniqueName: \"kubernetes.io/projected/f62cc6af-1032-4593-a11f-0dde4a6020ae-kube-api-access-chzvc\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.248602 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-db-sync-config-data\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350016 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-db-sync-config-data\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350057 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-logs\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-nb\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350116 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-combined-ca-bundle\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350229 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-combined-ca-bundle\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-config-data\") pod \"placement-db-sync-ckxn9\" (UID: 
\"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350329 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-458kg\" (UniqueName: \"kubernetes.io/projected/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-kube-api-access-458kg\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350347 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-config\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chzvc\" (UniqueName: \"kubernetes.io/projected/f62cc6af-1032-4593-a11f-0dde4a6020ae-kube-api-access-chzvc\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-sb\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350476 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-dns-svc\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350516 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-scripts\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.350650 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7mg\" (UniqueName: \"kubernetes.io/projected/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-kube-api-access-jj7mg\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.353943 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-db-sync-config-data\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.354090 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-combined-ca-bundle\") pod \"barbican-db-sync-298l7\" (UID: 
\"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.389149 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chzvc\" (UniqueName: \"kubernetes.io/projected/f62cc6af-1032-4593-a11f-0dde4a6020ae-kube-api-access-chzvc\") pod \"barbican-db-sync-298l7\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452216 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj7mg\" (UniqueName: \"kubernetes.io/projected/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-kube-api-access-jj7mg\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452276 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-logs\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452305 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-nb\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452344 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-combined-ca-bundle\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452370 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-config-data\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-458kg\" (UniqueName: \"kubernetes.io/projected/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-kube-api-access-458kg\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452409 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-config\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452450 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-sb\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc 
kubenswrapper[4856]: I1122 07:24:24.452482 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-dns-svc\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.452525 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-scripts\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.453208 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-logs\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.454383 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-nb\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.454459 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-config\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.454923 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-dns-svc\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.455611 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-config-data\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.456562 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-scripts\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.457108 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-sb\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.462029 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-combined-ca-bundle\") pod 
\"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.470273 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-458kg\" (UniqueName: \"kubernetes.io/projected/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-kube-api-access-458kg\") pod \"dnsmasq-dns-794df4974f-bqzxn\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.470448 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj7mg\" (UniqueName: \"kubernetes.io/projected/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-kube-api-access-jj7mg\") pod \"placement-db-sync-ckxn9\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.470800 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.515219 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-298l7" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.604081 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ckxn9" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.608399 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.659849 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9f7b7b67-27k8v"] Nov 22 07:24:24 crc kubenswrapper[4856]: W1122 07:24:24.675487 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a92f1c3_e6ff_433b_8892_cab5bc99555a.slice/crio-9397484773c53ce87c7fbfa96428d71ba61efaa5ea3a61cbfa262d5ec112788f WatchSource:0}: Error finding container 9397484773c53ce87c7fbfa96428d71ba61efaa5ea3a61cbfa262d5ec112788f: Status 404 returned error can't find the container with id 9397484773c53ce87c7fbfa96428d71ba61efaa5ea3a61cbfa262d5ec112788f Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.731586 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-jthw8"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.781748 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.847480 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-n9nhw"] Nov 22 07:24:24 crc kubenswrapper[4856]: W1122 07:24:24.868355 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e60644a_2d82_40ed_9d0b_bb144837842a.slice/crio-3b2272adc0ae334e531057d730af0ee18e91344a241ae15700a59fb9ff6915f5 WatchSource:0}: Error finding container 3b2272adc0ae334e531057d730af0ee18e91344a241ae15700a59fb9ff6915f5: Status 404 returned error can't find the container with id 3b2272adc0ae334e531057d730af0ee18e91344a241ae15700a59fb9ff6915f5 Nov 22 07:24:24 crc kubenswrapper[4856]: I1122 07:24:24.952107 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-w7rvq"] Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.063674 4856 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-298l7"] Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.136717 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-ckxn9"] Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.235865 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-794df4974f-bqzxn"] Nov 22 07:24:25 crc kubenswrapper[4856]: W1122 07:24:25.244695 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1748cfc_ac6b_454c_9dbd_1e18f3b16d82.slice/crio-50013ece6a1fe425ee65c97430756957b2c922cba2c2c80ff9ec64cb1bf55c3b WatchSource:0}: Error finding container 50013ece6a1fe425ee65c97430756957b2c922cba2c2c80ff9ec64cb1bf55c3b: Status 404 returned error can't find the container with id 50013ece6a1fe425ee65c97430756957b2c922cba2c2c80ff9ec64cb1bf55c3b Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.424197 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w7rvq" event={"ID":"b446b176-7d24-4bb1-ab69-7d78c1c1e99f","Type":"ContainerStarted","Data":"1e67d8cd584ceeb200c9518aba1f39886ff3c391d12da5f8ac55f49863259170"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.424552 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w7rvq" event={"ID":"b446b176-7d24-4bb1-ab69-7d78c1c1e99f","Type":"ContainerStarted","Data":"4a145fd635b3af96adc1a373b0ff0dcd6c362589ac650981c611e09a5cef50b4"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.434165 4856 generic.go:334] "Generic (PLEG): container finished" podID="8a92f1c3-e6ff-433b-8892-cab5bc99555a" containerID="dd98aa0ccc152c0e762b8cf396f7eed0a7973f29feeec86dec8316daacd56bb2" exitCode=0 Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.434267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" event={"ID":"8a92f1c3-e6ff-433b-8892-cab5bc99555a","Type":"ContainerDied","Data":"dd98aa0ccc152c0e762b8cf396f7eed0a7973f29feeec86dec8316daacd56bb2"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.434338 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" event={"ID":"8a92f1c3-e6ff-433b-8892-cab5bc99555a","Type":"ContainerStarted","Data":"9397484773c53ce87c7fbfa96428d71ba61efaa5ea3a61cbfa262d5ec112788f"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.435800 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" event={"ID":"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82","Type":"ContainerStarted","Data":"50013ece6a1fe425ee65c97430756957b2c922cba2c2c80ff9ec64cb1bf55c3b"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.440981 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n9nhw" event={"ID":"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c","Type":"ContainerStarted","Data":"ee6f4bb3d77c6bb784c9b5850eae8ece617005f7cdaaac2e6df4774a4413ab71"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.443027 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-298l7" event={"ID":"f62cc6af-1032-4593-a11f-0dde4a6020ae","Type":"ContainerStarted","Data":"fbb7a32b505ad0f80f9487229e47feac27c2a855957cabf6899a6c15d5173c4e"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.446611 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-db-sync-w7rvq" podStartSLOduration=2.446589524 podStartE2EDuration="2.446589524s" podCreationTimestamp="2025-11-22 07:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:25.441225159 +0000 UTC m=+1307.854618417" watchObservedRunningTime="2025-11-22 07:24:25.446589524 +0000 UTC m=+1307.859982782" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.448129 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jthw8" event={"ID":"d3f8176b-8158-442b-b2fe-b34810ecef99","Type":"ContainerStarted","Data":"ab5ac1bf362ac787b5174e82db4e12310e323fb9748e101c85540403217c1862"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.448169 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jthw8" event={"ID":"d3f8176b-8158-442b-b2fe-b34810ecef99","Type":"ContainerStarted","Data":"18cb4206759fc44e30fbe9df47747842f8d8e0d6903cc8213c3f86c6aef2d7c4"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.454107 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ckxn9" event={"ID":"ffb19735-07df-4fbd-9f9a-4d3aa861e03a","Type":"ContainerStarted","Data":"e26620867e598a5bfa415df96470fe005c1640b27fa71ebce34fe3224d92df45"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.472648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e60644a-2d82-40ed-9d0b-bb144837842a","Type":"ContainerStarted","Data":"3b2272adc0ae334e531057d730af0ee18e91344a241ae15700a59fb9ff6915f5"} Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.488995 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-jthw8" podStartSLOduration=2.488977611 podStartE2EDuration="2.488977611s" podCreationTimestamp="2025-11-22 07:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:25.485583934 +0000 UTC m=+1307.898977202" watchObservedRunningTime="2025-11-22 07:24:25.488977611 +0000 UTC m=+1307.902370869" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.816621 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.879485 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-sb\") pod \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.879926 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-config\") pod \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.879965 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-dns-svc\") pod \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.879997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbnt6\" (UniqueName: \"kubernetes.io/projected/8a92f1c3-e6ff-433b-8892-cab5bc99555a-kube-api-access-pbnt6\") pod \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.880073 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-nb\") pod \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\" (UID: \"8a92f1c3-e6ff-433b-8892-cab5bc99555a\") " Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.887218 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a92f1c3-e6ff-433b-8892-cab5bc99555a-kube-api-access-pbnt6" (OuterVolumeSpecName: "kube-api-access-pbnt6") pod "8a92f1c3-e6ff-433b-8892-cab5bc99555a" (UID: "8a92f1c3-e6ff-433b-8892-cab5bc99555a"). InnerVolumeSpecName "kube-api-access-pbnt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.904088 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a92f1c3-e6ff-433b-8892-cab5bc99555a" (UID: "8a92f1c3-e6ff-433b-8892-cab5bc99555a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.910079 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a92f1c3-e6ff-433b-8892-cab5bc99555a" (UID: "8a92f1c3-e6ff-433b-8892-cab5bc99555a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.912295 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a92f1c3-e6ff-433b-8892-cab5bc99555a" (UID: "8a92f1c3-e6ff-433b-8892-cab5bc99555a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.931298 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-config" (OuterVolumeSpecName: "config") pod "8a92f1c3-e6ff-433b-8892-cab5bc99555a" (UID: "8a92f1c3-e6ff-433b-8892-cab5bc99555a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.984036 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbnt6\" (UniqueName: \"kubernetes.io/projected/8a92f1c3-e6ff-433b-8892-cab5bc99555a-kube-api-access-pbnt6\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.984082 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.984096 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.984111 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:25 crc kubenswrapper[4856]: I1122 07:24:25.984124 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a92f1c3-e6ff-433b-8892-cab5bc99555a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.485651 4856 generic.go:334] "Generic (PLEG): container finished" podID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerID="d1872b43d48a7dd796d09ce684c61535aa2f733c293336f262363d2a33ae0724" exitCode=0 Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.485997 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" event={"ID":"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82","Type":"ContainerDied","Data":"d1872b43d48a7dd796d09ce684c61535aa2f733c293336f262363d2a33ae0724"} Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.489136 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.489718 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9f7b7b67-27k8v" event={"ID":"8a92f1c3-e6ff-433b-8892-cab5bc99555a","Type":"ContainerDied","Data":"9397484773c53ce87c7fbfa96428d71ba61efaa5ea3a61cbfa262d5ec112788f"} Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.489751 4856 scope.go:117] "RemoveContainer" containerID="dd98aa0ccc152c0e762b8cf396f7eed0a7973f29feeec86dec8316daacd56bb2" Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.624686 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9f7b7b67-27k8v"] Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.630412 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c9f7b7b67-27k8v"] Nov 22 07:24:26 crc kubenswrapper[4856]: I1122 07:24:26.727030 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a92f1c3-e6ff-433b-8892-cab5bc99555a" path="/var/lib/kubelet/pods/8a92f1c3-e6ff-433b-8892-cab5bc99555a/volumes" Nov 22 07:24:27 crc kubenswrapper[4856]: I1122 07:24:27.497535 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:24:27 crc kubenswrapper[4856]: I1122 07:24:27.505730 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" event={"ID":"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82","Type":"ContainerStarted","Data":"d08b0f39d313ca2bfb10a627ce9f6382f91298f77a6a3de7131c3e98a404d232"} Nov 22 07:24:27 crc kubenswrapper[4856]: I1122 07:24:27.505910 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:27 crc kubenswrapper[4856]: I1122 07:24:27.540042 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" podStartSLOduration=3.540020181 podStartE2EDuration="3.540020181s" podCreationTimestamp="2025-11-22 07:24:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:27.523051114 +0000 UTC m=+1309.936444372" watchObservedRunningTime="2025-11-22 07:24:27.540020181 +0000 UTC m=+1309.953413439" Nov 22 07:24:29 crc kubenswrapper[4856]: I1122 07:24:29.525938 4856 generic.go:334] "Generic (PLEG): container finished" podID="d3f8176b-8158-442b-b2fe-b34810ecef99" containerID="ab5ac1bf362ac787b5174e82db4e12310e323fb9748e101c85540403217c1862" exitCode=0 Nov 22 07:24:29 crc kubenswrapper[4856]: I1122 07:24:29.525946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jthw8" event={"ID":"d3f8176b-8158-442b-b2fe-b34810ecef99","Type":"ContainerDied","Data":"ab5ac1bf362ac787b5174e82db4e12310e323fb9748e101c85540403217c1862"} Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.610579 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.671397 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-9dc6m"] Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.680264 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="dnsmasq-dns" 
containerID="cri-o://b3b6e71e300a03394fe34c27f1ab9fdba9c2acea13d798b97a51ff2be3f5e36c" gracePeriod=10 Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.692995 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.750499 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-scripts\") pod \"d3f8176b-8158-442b-b2fe-b34810ecef99\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.750563 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-config-data\") pod \"d3f8176b-8158-442b-b2fe-b34810ecef99\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.750631 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-credential-keys\") pod \"d3f8176b-8158-442b-b2fe-b34810ecef99\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.750786 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8j4b\" (UniqueName: \"kubernetes.io/projected/d3f8176b-8158-442b-b2fe-b34810ecef99-kube-api-access-r8j4b\") pod \"d3f8176b-8158-442b-b2fe-b34810ecef99\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.750834 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-fernet-keys\") pod \"d3f8176b-8158-442b-b2fe-b34810ecef99\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.750878 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-combined-ca-bundle\") pod \"d3f8176b-8158-442b-b2fe-b34810ecef99\" (UID: \"d3f8176b-8158-442b-b2fe-b34810ecef99\") " Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.759888 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "d3f8176b-8158-442b-b2fe-b34810ecef99" (UID: "d3f8176b-8158-442b-b2fe-b34810ecef99"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.759921 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f8176b-8158-442b-b2fe-b34810ecef99-kube-api-access-r8j4b" (OuterVolumeSpecName: "kube-api-access-r8j4b") pod "d3f8176b-8158-442b-b2fe-b34810ecef99" (UID: "d3f8176b-8158-442b-b2fe-b34810ecef99"). InnerVolumeSpecName "kube-api-access-r8j4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.771042 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-scripts" (OuterVolumeSpecName: "scripts") pod "d3f8176b-8158-442b-b2fe-b34810ecef99" (UID: "d3f8176b-8158-442b-b2fe-b34810ecef99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.786584 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d3f8176b-8158-442b-b2fe-b34810ecef99" (UID: "d3f8176b-8158-442b-b2fe-b34810ecef99"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.815289 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3f8176b-8158-442b-b2fe-b34810ecef99" (UID: "d3f8176b-8158-442b-b2fe-b34810ecef99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.816251 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-config-data" (OuterVolumeSpecName: "config-data") pod "d3f8176b-8158-442b-b2fe-b34810ecef99" (UID: "d3f8176b-8158-442b-b2fe-b34810ecef99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.852690 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8j4b\" (UniqueName: \"kubernetes.io/projected/d3f8176b-8158-442b-b2fe-b34810ecef99-kube-api-access-r8j4b\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.852728 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.852739 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.852747 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.852756 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:34 crc kubenswrapper[4856]: I1122 07:24:34.852763 4856 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d3f8176b-8158-442b-b2fe-b34810ecef99-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.597695 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-jthw8" 
event={"ID":"d3f8176b-8158-442b-b2fe-b34810ecef99","Type":"ContainerDied","Data":"18cb4206759fc44e30fbe9df47747842f8d8e0d6903cc8213c3f86c6aef2d7c4"} Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.598029 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18cb4206759fc44e30fbe9df47747842f8d8e0d6903cc8213c3f86c6aef2d7c4" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.597797 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-jthw8" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.603128 4856 generic.go:334] "Generic (PLEG): container finished" podID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerID="b3b6e71e300a03394fe34c27f1ab9fdba9c2acea13d798b97a51ff2be3f5e36c" exitCode=0 Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.603195 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" event={"ID":"ec618b5f-bf54-4636-b50b-330cdfdfcd62","Type":"ContainerDied","Data":"b3b6e71e300a03394fe34c27f1ab9fdba9c2acea13d798b97a51ff2be3f5e36c"} Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.821824 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-jthw8"] Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.829185 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-jthw8"] Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.926418 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wczqs"] Nov 22 07:24:35 crc kubenswrapper[4856]: E1122 07:24:35.926993 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f8176b-8158-442b-b2fe-b34810ecef99" containerName="keystone-bootstrap" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.927036 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f8176b-8158-442b-b2fe-b34810ecef99" containerName="keystone-bootstrap" Nov 22 07:24:35 crc kubenswrapper[4856]: E1122 07:24:35.927067 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a92f1c3-e6ff-433b-8892-cab5bc99555a" containerName="init" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.927075 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a92f1c3-e6ff-433b-8892-cab5bc99555a" containerName="init" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.927262 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f8176b-8158-442b-b2fe-b34810ecef99" containerName="keystone-bootstrap" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.927287 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a92f1c3-e6ff-433b-8892-cab5bc99555a" containerName="init" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.927836 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.929951 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.932926 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7mv7p" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.933764 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.933831 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.934073 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.945067 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wczqs"] Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.980091 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-scripts\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.980175 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-credential-keys\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.980200 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-config-data\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.980539 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7hv\" (UniqueName: \"kubernetes.io/projected/23bda6aa-0edd-4530-99a3-860bf6dff736-kube-api-access-hq7hv\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.980759 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-combined-ca-bundle\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:35 crc kubenswrapper[4856]: I1122 07:24:35.980825 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-fernet-keys\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.082993 4856 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-hq7hv\" (UniqueName: \"kubernetes.io/projected/23bda6aa-0edd-4530-99a3-860bf6dff736-kube-api-access-hq7hv\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.083052 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-combined-ca-bundle\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.083075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-fernet-keys\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.083124 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-scripts\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.083192 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-credential-keys\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.083222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-config-data\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.089937 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-scripts\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.090102 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-credential-keys\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.092295 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-config-data\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.093033 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-fernet-keys\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " 
pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.100189 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq7hv\" (UniqueName: \"kubernetes.io/projected/23bda6aa-0edd-4530-99a3-860bf6dff736-kube-api-access-hq7hv\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.101680 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-combined-ca-bundle\") pod \"keystone-bootstrap-wczqs\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.245579 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:24:36 crc kubenswrapper[4856]: I1122 07:24:36.735019 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f8176b-8158-442b-b2fe-b34810ecef99" path="/var/lib/kubelet/pods/d3f8176b-8158-442b-b2fe-b34810ecef99/volumes" Nov 22 07:24:37 crc kubenswrapper[4856]: E1122 07:24:37.499569 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140" Nov 22 07:24:37 crc kubenswrapper[4856]: E1122 07:24:37.500171 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central@sha256:d375d370be5ead0dac71109af644849e5795f535f9ad8eeacea261d77ae6f140,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h64bh5bch64bh58dh5dh58ch78h57ch568h76hf7h68h667h6bh599hc6h97h574h589h686hc8h65ch5b7h97h96h6fh567h85hc6h567h57bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3e60644a-2d82-40ed-9d0b-bb144837842a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:24:44 crc kubenswrapper[4856]: I1122 07:24:44.211415 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Nov 22 07:24:44 crc kubenswrapper[4856]: I1122 07:24:44.686185 4856 generic.go:334] "Generic (PLEG): container finished" podID="10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" containerID="1d30ff58d06db234c6dbea2039224f88949d93b4b3b10ffc82e7d58969c01365" exitCode=0 Nov 22 07:24:44 crc kubenswrapper[4856]: I1122 07:24:44.686262 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kvfrn" event={"ID":"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb","Type":"ContainerDied","Data":"1d30ff58d06db234c6dbea2039224f88949d93b4b3b10ffc82e7d58969c01365"} Nov 22 07:24:44 crc kubenswrapper[4856]: I1122 07:24:44.891324 4856 scope.go:117] "RemoveContainer" containerID="7eed5742a247dba7962a6ed3fe37d66bdb5a9ce7411624bbfcb903a0f9f7bd63" Nov 22 07:24:46 crc kubenswrapper[4856]: E1122 07:24:46.721476 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879" Nov 22 07:24:46 crc kubenswrapper[4856]: E1122 07:24:46.721955 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h44lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-n9nhw_openstack(d8c4fd78-c2bf-4a39-8db9-e511ae36a38c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:24:46 crc kubenswrapper[4856]: E1122 07:24:46.723299 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-n9nhw" podUID="d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.201860 4856 scope.go:117] "RemoveContainer" containerID="8fa9d0fbe6604ab86da745f232791d7e63c3ccb79873f9131d887588d3524e09" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.326615 4856 scope.go:117] "RemoveContainer" containerID="0574bdf597b64fb6f4e3495c7af4c27eba6df879f1ad9a91a69a3350adf02c4b" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.339743 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.421048 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-nb\") pod \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.421109 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwg7b\" (UniqueName: \"kubernetes.io/projected/ec618b5f-bf54-4636-b50b-330cdfdfcd62-kube-api-access-lwg7b\") pod \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.421200 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-sb\") pod \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.421275 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-config\") pod \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.421302 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-dns-svc\") pod \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\" (UID: \"ec618b5f-bf54-4636-b50b-330cdfdfcd62\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.428041 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec618b5f-bf54-4636-b50b-330cdfdfcd62-kube-api-access-lwg7b" (OuterVolumeSpecName: "kube-api-access-lwg7b") pod "ec618b5f-bf54-4636-b50b-330cdfdfcd62" (UID: "ec618b5f-bf54-4636-b50b-330cdfdfcd62"). InnerVolumeSpecName "kube-api-access-lwg7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.492702 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec618b5f-bf54-4636-b50b-330cdfdfcd62" (UID: "ec618b5f-bf54-4636-b50b-330cdfdfcd62"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.498497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-config" (OuterVolumeSpecName: "config") pod "ec618b5f-bf54-4636-b50b-330cdfdfcd62" (UID: "ec618b5f-bf54-4636-b50b-330cdfdfcd62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.511947 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec618b5f-bf54-4636-b50b-330cdfdfcd62" (UID: "ec618b5f-bf54-4636-b50b-330cdfdfcd62"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.522181 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec618b5f-bf54-4636-b50b-330cdfdfcd62" (UID: "ec618b5f-bf54-4636-b50b-330cdfdfcd62"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.522849 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.522882 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.522892 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.522900 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec618b5f-bf54-4636-b50b-330cdfdfcd62-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.522909 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwg7b\" (UniqueName: \"kubernetes.io/projected/ec618b5f-bf54-4636-b50b-330cdfdfcd62-kube-api-access-lwg7b\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:47 crc kubenswrapper[4856]: E1122 07:24:47.557370 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645" Nov 22 07:24:47 crc kubenswrapper[4856]: E1122 07:24:47.557721 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-chzvc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-298l7_openstack(f62cc6af-1032-4593-a11f-0dde4a6020ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:24:47 crc kubenswrapper[4856]: E1122 07:24:47.558945 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-298l7" podUID="f62cc6af-1032-4593-a11f-0dde4a6020ae" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.719182 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ckxn9" event={"ID":"ffb19735-07df-4fbd-9f9a-4d3aa861e03a","Type":"ContainerStarted","Data":"fe385768b79ac31126e52f3869af0ea80aced065b1de1ca8f0de99c92dbf7f22"} Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.725321 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.729574 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" event={"ID":"ec618b5f-bf54-4636-b50b-330cdfdfcd62","Type":"ContainerDied","Data":"cec7a6d45622f5ea52482e93c689af078fa72222451f214f547dd9001829bf3d"} Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.729615 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wczqs"] Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.729640 4856 scope.go:117] "RemoveContainer" containerID="b3b6e71e300a03394fe34c27f1ab9fdba9c2acea13d798b97a51ff2be3f5e36c" Nov 22 07:24:47 crc kubenswrapper[4856]: E1122 07:24:47.729812 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:37d64e0a00c54e71a4c1fcbbbf7e832f6886ffd03c9a02b6ee3ca48fabc30879\\\"\"" pod="openstack/cinder-db-sync-n9nhw" podUID="d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" Nov 22 07:24:47 crc kubenswrapper[4856]: E1122 07:24:47.738149 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:4c93a5cccb9971e24f05daf93b3aa11ba71752bc3469a1a1a2c4906f92f69645\\\"\"" pod="openstack/barbican-db-sync-298l7" podUID="f62cc6af-1032-4593-a11f-0dde4a6020ae" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.756309 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.766649 4856 scope.go:117] "RemoveContainer" containerID="df9db1c81948a84fcf97203f1e737013cca74f18585909f9e9b3f8bb5907c03a" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.795198 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-9dc6m"] Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.802850 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9fdb784c-9dc6m"] Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.822371 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kvfrn" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.933726 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-db-sync-config-data\") pod \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.933823 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-combined-ca-bundle\") pod \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.933916 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-config-data\") pod \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.933941 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxgt4\" (UniqueName: \"kubernetes.io/projected/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-kube-api-access-zxgt4\") pod \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\" (UID: \"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb\") " Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.937467 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-kube-api-access-zxgt4" (OuterVolumeSpecName: "kube-api-access-zxgt4") pod "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" (UID: "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb"). InnerVolumeSpecName "kube-api-access-zxgt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.939031 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" (UID: "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.955939 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" (UID: "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:47 crc kubenswrapper[4856]: I1122 07:24:47.974964 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-config-data" (OuterVolumeSpecName: "config-data") pod "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" (UID: "10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.037584 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.037616 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.037625 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.037636 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxgt4\" (UniqueName: \"kubernetes.io/projected/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb-kube-api-access-zxgt4\") on node \"crc\" DevicePath \"\"" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.726232 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" path="/var/lib/kubelet/pods/ec618b5f-bf54-4636-b50b-330cdfdfcd62/volumes" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.742254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wczqs" event={"ID":"23bda6aa-0edd-4530-99a3-860bf6dff736","Type":"ContainerStarted","Data":"07540c49088f81e1a1251cf274f0f75cda056c029e0a8f46a36b5f128a0b8a70"} Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.742305 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wczqs" event={"ID":"23bda6aa-0edd-4530-99a3-860bf6dff736","Type":"ContainerStarted","Data":"cd50d30b39438746d60df0ee18c0b971db143769ee2dc7cf019f1776c8b9f136"} Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.748450 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-kvfrn" event={"ID":"10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb","Type":"ContainerDied","Data":"8cbd2b9f59d2d013a91b9f8e46b21cd06fae8e820ede4733356baef05c62dad6"} Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.748500 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cbd2b9f59d2d013a91b9f8e46b21cd06fae8e820ede4733356baef05c62dad6" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.748520 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-kvfrn" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.767239 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wczqs" podStartSLOduration=13.767224649 podStartE2EDuration="13.767224649s" podCreationTimestamp="2025-11-22 07:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:48.763614155 +0000 UTC m=+1331.177007433" watchObservedRunningTime="2025-11-22 07:24:48.767224649 +0000 UTC m=+1331.180617917" Nov 22 07:24:48 crc kubenswrapper[4856]: I1122 07:24:48.793391 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-ckxn9" podStartSLOduration=2.742137539 podStartE2EDuration="24.79337315s" podCreationTimestamp="2025-11-22 07:24:24 +0000 UTC" firstStartedPulling="2025-11-22 07:24:25.150943894 +0000 UTC m=+1307.564337152" lastFinishedPulling="2025-11-22 07:24:47.202179515 +0000 UTC m=+1329.615572763" observedRunningTime="2025-11-22 07:24:48.786732309 +0000 UTC m=+1331.200125567" watchObservedRunningTime="2025-11-22 07:24:48.79337315 +0000 UTC m=+1331.206766408" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.164098 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79f9fb8c9c-6qrw9"] Nov 22 07:24:49 crc kubenswrapper[4856]: E1122 07:24:49.164465 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="dnsmasq-dns" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.164485 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="dnsmasq-dns" Nov 22 07:24:49 crc kubenswrapper[4856]: E1122 07:24:49.164502 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="init" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.164516 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="init" Nov 22 07:24:49 crc kubenswrapper[4856]: E1122 07:24:49.164572 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" containerName="glance-db-sync" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.164582 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" containerName="glance-db-sync" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.164743 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" containerName="glance-db-sync" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.164760 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="dnsmasq-dns" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.165651 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.181281 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79f9fb8c9c-6qrw9"] Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.216322 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9fdb784c-9dc6m" podUID="ec618b5f-bf54-4636-b50b-330cdfdfcd62" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: i/o timeout" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.263065 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-nb\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.263111 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-dns-svc\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.263157 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-sb\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.263206 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-config\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.263232 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzlnq\" (UniqueName: \"kubernetes.io/projected/736bf16a-9b84-4646-94fe-4eb5242fae71-kube-api-access-zzlnq\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.364306 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-nb\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.364357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-dns-svc\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.364398 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-sb\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.364454 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-config\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.364484 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzlnq\" (UniqueName: \"kubernetes.io/projected/736bf16a-9b84-4646-94fe-4eb5242fae71-kube-api-access-zzlnq\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.365588 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-nb\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.366059 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-sb\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.366188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-config\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.366656 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-dns-svc\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.389345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzlnq\" (UniqueName: \"kubernetes.io/projected/736bf16a-9b84-4646-94fe-4eb5242fae71-kube-api-access-zzlnq\") pod \"dnsmasq-dns-79f9fb8c9c-6qrw9\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:49 crc kubenswrapper[4856]: I1122 07:24:49.492009 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.057152 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.059005 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.061520 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.061614 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.061847 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-5wct7" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.075463 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.176457 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-logs\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.176514 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.176586 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwm68\" (UniqueName: \"kubernetes.io/projected/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-kube-api-access-fwm68\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.176615 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.176637 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.176652 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.176955 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " 
pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.278434 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.278514 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-logs\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.278566 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.278651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwm68\" (UniqueName: \"kubernetes.io/projected/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-kube-api-access-fwm68\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.278678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.278699 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.278719 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.279156 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.279706 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 
07:24:50.280987 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-logs\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.287907 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.289626 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-config-data\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.289949 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-scripts\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.297961 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwm68\" (UniqueName: \"kubernetes.io/projected/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-kube-api-access-fwm68\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.309235 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.368453 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.371453 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.373232 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.376347 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.380383 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.482018 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.482437 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.482519 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.482666 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxgbd\" (UniqueName: \"kubernetes.io/projected/05336763-c16e-41f1-b74e-7fcf9e5361f8-kube-api-access-mxgbd\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.482742 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.482835 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.482863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584182 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584229 4856 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584309 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584384 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584426 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxgbd\" (UniqueName: \"kubernetes.io/projected/05336763-c16e-41f1-b74e-7fcf9e5361f8-kube-api-access-mxgbd\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584459 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.584788 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.585681 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.587097 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-logs\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.590617 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.592180 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.600068 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.614082 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxgbd\" (UniqueName: \"kubernetes.io/projected/05336763-c16e-41f1-b74e-7fcf9e5361f8-kube-api-access-mxgbd\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.647206 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:24:50 crc kubenswrapper[4856]: I1122 07:24:50.698766 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.154704 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79f9fb8c9c-6qrw9"] Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.324773 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:24:51 crc kubenswrapper[4856]: W1122 07:24:51.332661 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f42cbc3_ef8a_429b_9dd0_fa55bce12ee0.slice/crio-989597828e03ffcf2dc6600ae4a624bf6a0a6f45adafc7f07685a48c81abce6a WatchSource:0}: Error finding container 989597828e03ffcf2dc6600ae4a624bf6a0a6f45adafc7f07685a48c81abce6a: Status 404 returned error can't find the container with id 989597828e03ffcf2dc6600ae4a624bf6a0a6f45adafc7f07685a48c81abce6a Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.421571 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:24:51 crc kubenswrapper[4856]: W1122 07:24:51.436447 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05336763_c16e_41f1_b74e_7fcf9e5361f8.slice/crio-6b304b1e7e4701ed55b112336cdd8c92a65e1f1684457e36d411a3a7e5950f38 WatchSource:0}: Error finding container 6b304b1e7e4701ed55b112336cdd8c92a65e1f1684457e36d411a3a7e5950f38: Status 404 returned error can't find the container with id 6b304b1e7e4701ed55b112336cdd8c92a65e1f1684457e36d411a3a7e5950f38 Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.782930 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0","Type":"ContainerStarted","Data":"989597828e03ffcf2dc6600ae4a624bf6a0a6f45adafc7f07685a48c81abce6a"} Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.795871 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"9b6021a67115d6e55eab967cf6d9caa17bd06d922a3d54b43b6f5dec9196e96d"} Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.795928 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"1cf12acdc3f6a6abb938bdcfc295ffa2101088f787027d51f80b951797bb5873"} Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.798262 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e60644a-2d82-40ed-9d0b-bb144837842a","Type":"ContainerStarted","Data":"ac2f159623db3bfc776fec7df8aadf9a79636a8bec4d098e00bbbf20e2c19d12"} Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.803420 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05336763-c16e-41f1-b74e-7fcf9e5361f8","Type":"ContainerStarted","Data":"6b304b1e7e4701ed55b112336cdd8c92a65e1f1684457e36d411a3a7e5950f38"} Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.805718 4856 generic.go:334] "Generic (PLEG): container finished" podID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerID="3405e6ef45bdff5aefe5ca7a61c095aea67c3e9c1a448788fb95819ba429c170" exitCode=0 Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.805765 4856 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" event={"ID":"736bf16a-9b84-4646-94fe-4eb5242fae71","Type":"ContainerDied","Data":"3405e6ef45bdff5aefe5ca7a61c095aea67c3e9c1a448788fb95819ba429c170"} Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.805797 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" event={"ID":"736bf16a-9b84-4646-94fe-4eb5242fae71","Type":"ContainerStarted","Data":"b60ec4564b0ea0b6259bbf88734285f3de1cf43c68187932127a641d5f66f291"} Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.936899 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:24:51 crc kubenswrapper[4856]: I1122 07:24:51.997810 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.851167 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"423b2c9f27662f7d6367f52a13a9033ed0e18cb78b5dc553d9b64162d80e2544"} Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.851897 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerStarted","Data":"dafe6ce95027e629d7af60bc33995b31a71bb7ef4de51b371a2ee48e7639d083"} Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.854832 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05336763-c16e-41f1-b74e-7fcf9e5361f8","Type":"ContainerStarted","Data":"dd016f4d8e10a1ecaca531b22481a585ec8cece43423334c4e85e98dabfc79f5"} Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.857834 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" event={"ID":"736bf16a-9b84-4646-94fe-4eb5242fae71","Type":"ContainerStarted","Data":"c1d8a20e69aad16ac7e07befcaa71f1ada72b72802c30ee6137ead75b22c170b"} Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.858789 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.862288 4856 generic.go:334] "Generic (PLEG): container finished" podID="23bda6aa-0edd-4530-99a3-860bf6dff736" containerID="07540c49088f81e1a1251cf274f0f75cda056c029e0a8f46a36b5f128a0b8a70" exitCode=0 Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.862409 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wczqs" event={"ID":"23bda6aa-0edd-4530-99a3-860bf6dff736","Type":"ContainerDied","Data":"07540c49088f81e1a1251cf274f0f75cda056c029e0a8f46a36b5f128a0b8a70"} Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.866183 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0","Type":"ContainerStarted","Data":"96d03177fd0d127c95ccc47207005db1d7e1d9b3409032848f58607a735eefdb"} Nov 22 07:24:53 crc kubenswrapper[4856]: I1122 07:24:53.881820 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podStartSLOduration=4.881788585 podStartE2EDuration="4.881788585s" podCreationTimestamp="2025-11-22 07:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 07:24:53.87814759 +0000 UTC m=+1336.291540838" watchObservedRunningTime="2025-11-22 07:24:53.881788585 +0000 UTC m=+1336.295181843" Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.876476 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05336763-c16e-41f1-b74e-7fcf9e5361f8","Type":"ContainerStarted","Data":"7af43c562e46305d3a3a13ed3759630716e31ef3b4a003906ebf5c2827e46888"} Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.877086 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-log" containerID="cri-o://dd016f4d8e10a1ecaca531b22481a585ec8cece43423334c4e85e98dabfc79f5" gracePeriod=30 Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.877194 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-httpd" containerID="cri-o://7af43c562e46305d3a3a13ed3759630716e31ef3b4a003906ebf5c2827e46888" gracePeriod=30 Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.878620 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0","Type":"ContainerStarted","Data":"4d6a38a16008a90c2713b1366cb55554fa650ba2817fa73cf36c092e78d835b9"} Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.878803 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-log" containerID="cri-o://96d03177fd0d127c95ccc47207005db1d7e1d9b3409032848f58607a735eefdb" gracePeriod=30 Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.878863 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-httpd" containerID="cri-o://4d6a38a16008a90c2713b1366cb55554fa650ba2817fa73cf36c092e78d835b9" gracePeriod=30 Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.916577 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.916561821 podStartE2EDuration="5.916561821s" podCreationTimestamp="2025-11-22 07:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:54.915035747 +0000 UTC m=+1337.328429005" watchObservedRunningTime="2025-11-22 07:24:54.916561821 +0000 UTC m=+1337.329955079" Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.916941 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.916937151 podStartE2EDuration="5.916937151s" podCreationTimestamp="2025-11-22 07:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:24:54.895991 +0000 UTC m=+1337.309384268" watchObservedRunningTime="2025-11-22 07:24:54.916937151 +0000 UTC m=+1337.330330399" Nov 22 07:24:54 crc kubenswrapper[4856]: I1122 07:24:54.953969 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" 
podStartSLOduration=38.410120962 podStartE2EDuration="1m45.953952395s" podCreationTimestamp="2025-11-22 07:23:09 +0000 UTC" firstStartedPulling="2025-11-22 07:23:43.176543805 +0000 UTC m=+1265.589937063" lastFinishedPulling="2025-11-22 07:24:50.720375238 +0000 UTC m=+1333.133768496" observedRunningTime="2025-11-22 07:24:54.9510408 +0000 UTC m=+1337.364434058" watchObservedRunningTime="2025-11-22 07:24:54.953952395 +0000 UTC m=+1337.367345653" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.316734 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79f9fb8c9c-6qrw9"] Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.366846 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-588bcb86c-tjc5x"] Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.368588 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.372134 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.393408 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-588bcb86c-tjc5x"] Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.490537 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-config\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.490599 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-sb\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.490640 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-swift-storage-0\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.490872 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-svc\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.490938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-nb\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.490964 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zdw\" (UniqueName: 
\"kubernetes.io/projected/4cae477b-f4c8-416e-ac2d-de6cecccfafc-kube-api-access-q9zdw\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.593435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-svc\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.593488 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-nb\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.593512 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zdw\" (UniqueName: \"kubernetes.io/projected/4cae477b-f4c8-416e-ac2d-de6cecccfafc-kube-api-access-q9zdw\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.593647 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-config\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.593686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-sb\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.593737 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-swift-storage-0\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.594737 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-swift-storage-0\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.594758 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-nb\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.594764 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-svc\") pod 
\"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.594889 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-config\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.595586 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-sb\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.617259 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9zdw\" (UniqueName: \"kubernetes.io/projected/4cae477b-f4c8-416e-ac2d-de6cecccfafc-kube-api-access-q9zdw\") pod \"dnsmasq-dns-588bcb86c-tjc5x\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.806769 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.891858 4856 generic.go:334] "Generic (PLEG): container finished" podID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerID="7af43c562e46305d3a3a13ed3759630716e31ef3b4a003906ebf5c2827e46888" exitCode=0 Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.891890 4856 generic.go:334] "Generic (PLEG): container finished" podID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerID="dd016f4d8e10a1ecaca531b22481a585ec8cece43423334c4e85e98dabfc79f5" exitCode=143 Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.891932 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05336763-c16e-41f1-b74e-7fcf9e5361f8","Type":"ContainerDied","Data":"7af43c562e46305d3a3a13ed3759630716e31ef3b4a003906ebf5c2827e46888"} Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.891959 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05336763-c16e-41f1-b74e-7fcf9e5361f8","Type":"ContainerDied","Data":"dd016f4d8e10a1ecaca531b22481a585ec8cece43423334c4e85e98dabfc79f5"} Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.894217 4856 generic.go:334] "Generic (PLEG): container finished" podID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerID="4d6a38a16008a90c2713b1366cb55554fa650ba2817fa73cf36c092e78d835b9" exitCode=0 Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.894265 4856 generic.go:334] "Generic (PLEG): container finished" podID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerID="96d03177fd0d127c95ccc47207005db1d7e1d9b3409032848f58607a735eefdb" exitCode=143 Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.894339 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0","Type":"ContainerDied","Data":"4d6a38a16008a90c2713b1366cb55554fa650ba2817fa73cf36c092e78d835b9"} Nov 22 07:24:55 crc kubenswrapper[4856]: I1122 07:24:55.894420 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0","Type":"ContainerDied","Data":"96d03177fd0d127c95ccc47207005db1d7e1d9b3409032848f58607a735eefdb"} Nov 22 07:24:56 crc kubenswrapper[4856]: I1122 07:24:56.904356 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" containerID="cri-o://c1d8a20e69aad16ac7e07befcaa71f1ada72b72802c30ee6137ead75b22c170b" gracePeriod=10 Nov 22 07:24:57 crc kubenswrapper[4856]: I1122 07:24:57.926292 4856 generic.go:334] "Generic (PLEG): container finished" podID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerID="c1d8a20e69aad16ac7e07befcaa71f1ada72b72802c30ee6137ead75b22c170b" exitCode=0 Nov 22 07:24:57 crc kubenswrapper[4856]: I1122 07:24:57.926337 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" event={"ID":"736bf16a-9b84-4646-94fe-4eb5242fae71","Type":"ContainerDied","Data":"c1d8a20e69aad16ac7e07befcaa71f1ada72b72802c30ee6137ead75b22c170b"} Nov 22 07:24:59 crc kubenswrapper[4856]: I1122 07:24:59.493467 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 22 07:25:04 crc kubenswrapper[4856]: I1122 07:25:04.493564 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 22 07:25:09 crc kubenswrapper[4856]: I1122 07:25:09.492712 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 22 07:25:14 crc kubenswrapper[4856]: I1122 07:25:14.493349 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 22 07:25:19 crc kubenswrapper[4856]: I1122 07:25:19.492642 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 22 07:25:20 crc kubenswrapper[4856]: I1122 07:25:20.376933 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:25:20 crc kubenswrapper[4856]: I1122 07:25:20.376997 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:25:20 crc kubenswrapper[4856]: I1122 07:25:20.699124 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:20 crc kubenswrapper[4856]: I1122 07:25:20.699220 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.680997 4856 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.687976 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832219 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-fernet-keys\") pod \"23bda6aa-0edd-4530-99a3-860bf6dff736\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832280 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq7hv\" (UniqueName: \"kubernetes.io/projected/23bda6aa-0edd-4530-99a3-860bf6dff736-kube-api-access-hq7hv\") pod \"23bda6aa-0edd-4530-99a3-860bf6dff736\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832323 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-logs\") pod \"05336763-c16e-41f1-b74e-7fcf9e5361f8\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832363 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-config-data\") pod \"23bda6aa-0edd-4530-99a3-860bf6dff736\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832404 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxgbd\" (UniqueName: \"kubernetes.io/projected/05336763-c16e-41f1-b74e-7fcf9e5361f8-kube-api-access-mxgbd\") pod \"05336763-c16e-41f1-b74e-7fcf9e5361f8\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832426 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-scripts\") pod \"05336763-c16e-41f1-b74e-7fcf9e5361f8\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832482 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-credential-keys\") pod \"23bda6aa-0edd-4530-99a3-860bf6dff736\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832531 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"05336763-c16e-41f1-b74e-7fcf9e5361f8\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832573 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-scripts\") pod \"23bda6aa-0edd-4530-99a3-860bf6dff736\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832606 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-combined-ca-bundle\") pod \"05336763-c16e-41f1-b74e-7fcf9e5361f8\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.832630 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-combined-ca-bundle\") pod \"23bda6aa-0edd-4530-99a3-860bf6dff736\" (UID: \"23bda6aa-0edd-4530-99a3-860bf6dff736\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.833122 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-logs" (OuterVolumeSpecName: "logs") pod "05336763-c16e-41f1-b74e-7fcf9e5361f8" (UID: "05336763-c16e-41f1-b74e-7fcf9e5361f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.833582 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-httpd-run\") pod \"05336763-c16e-41f1-b74e-7fcf9e5361f8\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.833713 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-config-data\") pod \"05336763-c16e-41f1-b74e-7fcf9e5361f8\" (UID: \"05336763-c16e-41f1-b74e-7fcf9e5361f8\") " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.834181 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.841039 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-scripts" (OuterVolumeSpecName: "scripts") pod "23bda6aa-0edd-4530-99a3-860bf6dff736" (UID: "23bda6aa-0edd-4530-99a3-860bf6dff736"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.842447 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "23bda6aa-0edd-4530-99a3-860bf6dff736" (UID: "23bda6aa-0edd-4530-99a3-860bf6dff736"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.842462 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05336763-c16e-41f1-b74e-7fcf9e5361f8-kube-api-access-mxgbd" (OuterVolumeSpecName: "kube-api-access-mxgbd") pod "05336763-c16e-41f1-b74e-7fcf9e5361f8" (UID: "05336763-c16e-41f1-b74e-7fcf9e5361f8"). InnerVolumeSpecName "kube-api-access-mxgbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.845273 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "23bda6aa-0edd-4530-99a3-860bf6dff736" (UID: "23bda6aa-0edd-4530-99a3-860bf6dff736"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.848442 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "05336763-c16e-41f1-b74e-7fcf9e5361f8" (UID: "05336763-c16e-41f1-b74e-7fcf9e5361f8"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.853785 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-scripts" (OuterVolumeSpecName: "scripts") pod "05336763-c16e-41f1-b74e-7fcf9e5361f8" (UID: "05336763-c16e-41f1-b74e-7fcf9e5361f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.853874 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23bda6aa-0edd-4530-99a3-860bf6dff736-kube-api-access-hq7hv" (OuterVolumeSpecName: "kube-api-access-hq7hv") pod "23bda6aa-0edd-4530-99a3-860bf6dff736" (UID: "23bda6aa-0edd-4530-99a3-860bf6dff736"). InnerVolumeSpecName "kube-api-access-hq7hv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.860049 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "05336763-c16e-41f1-b74e-7fcf9e5361f8" (UID: "05336763-c16e-41f1-b74e-7fcf9e5361f8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.877203 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23bda6aa-0edd-4530-99a3-860bf6dff736" (UID: "23bda6aa-0edd-4530-99a3-860bf6dff736"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.882589 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-config-data" (OuterVolumeSpecName: "config-data") pod "23bda6aa-0edd-4530-99a3-860bf6dff736" (UID: "23bda6aa-0edd-4530-99a3-860bf6dff736"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936075 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936109 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq7hv\" (UniqueName: \"kubernetes.io/projected/23bda6aa-0edd-4530-99a3-860bf6dff736-kube-api-access-hq7hv\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936120 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936128 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxgbd\" (UniqueName: \"kubernetes.io/projected/05336763-c16e-41f1-b74e-7fcf9e5361f8-kube-api-access-mxgbd\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936136 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936146 4856 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936174 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936183 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936192 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bda6aa-0edd-4530-99a3-860bf6dff736-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.936200 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/05336763-c16e-41f1-b74e-7fcf9e5361f8-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:21 crc kubenswrapper[4856]: I1122 07:25:21.960586 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.038079 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.065188 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-588bcb86c-tjc5x"] Nov 22 07:25:22 crc kubenswrapper[4856]: W1122 07:25:22.066997 4856 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4cae477b_f4c8_416e_ac2d_de6cecccfafc.slice/crio-c78931d6ee08fa0b1c001c68375f58b864096e281bbf51e700fe6a651e27035e WatchSource:0}: Error finding container c78931d6ee08fa0b1c001c68375f58b864096e281bbf51e700fe6a651e27035e: Status 404 returned error can't find the container with id c78931d6ee08fa0b1c001c68375f58b864096e281bbf51e700fe6a651e27035e Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.142600 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"05336763-c16e-41f1-b74e-7fcf9e5361f8","Type":"ContainerDied","Data":"6b304b1e7e4701ed55b112336cdd8c92a65e1f1684457e36d411a3a7e5950f38"} Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.142652 4856 scope.go:117] "RemoveContainer" containerID="7af43c562e46305d3a3a13ed3759630716e31ef3b4a003906ebf5c2827e46888" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.142613 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.145174 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wczqs" event={"ID":"23bda6aa-0edd-4530-99a3-860bf6dff736","Type":"ContainerDied","Data":"cd50d30b39438746d60df0ee18c0b971db143769ee2dc7cf019f1776c8b9f136"} Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.145219 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd50d30b39438746d60df0ee18c0b971db143769ee2dc7cf019f1776c8b9f136" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.145185 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wczqs" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.146224 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" event={"ID":"4cae477b-f4c8-416e-ac2d-de6cecccfafc","Type":"ContainerStarted","Data":"c78931d6ee08fa0b1c001c68375f58b864096e281bbf51e700fe6a651e27035e"} Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.203442 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-config-data" (OuterVolumeSpecName: "config-data") pod "05336763-c16e-41f1-b74e-7fcf9e5361f8" (UID: "05336763-c16e-41f1-b74e-7fcf9e5361f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.207269 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05336763-c16e-41f1-b74e-7fcf9e5361f8" (UID: "05336763-c16e-41f1-b74e-7fcf9e5361f8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.241790 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.241825 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05336763-c16e-41f1-b74e-7fcf9e5361f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.396365 4856 scope.go:117] "RemoveContainer" containerID="dd016f4d8e10a1ecaca531b22481a585ec8cece43423334c4e85e98dabfc79f5" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.481325 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.491122 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.500571 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:25:22 crc kubenswrapper[4856]: E1122 07:25:22.500923 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-log" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.500948 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-log" Nov 22 07:25:22 crc kubenswrapper[4856]: E1122 07:25:22.500965 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-httpd" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.500973 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-httpd" Nov 22 07:25:22 crc kubenswrapper[4856]: E1122 07:25:22.501005 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23bda6aa-0edd-4530-99a3-860bf6dff736" containerName="keystone-bootstrap" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.501014 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="23bda6aa-0edd-4530-99a3-860bf6dff736" containerName="keystone-bootstrap" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.501232 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="23bda6aa-0edd-4530-99a3-860bf6dff736" containerName="keystone-bootstrap" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.501250 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-httpd" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.501277 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" containerName="glance-log" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.518403 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.518500 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.520478 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.521065 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647263 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647320 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-logs\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647372 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647399 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647451 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647540 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647573 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h9fd\" (UniqueName: \"kubernetes.io/projected/bacb8184-1aa1-400c-99c8-1cab84e83cd7-kube-api-access-9h9fd\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.647650 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.725288 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05336763-c16e-41f1-b74e-7fcf9e5361f8" path="/var/lib/kubelet/pods/05336763-c16e-41f1-b74e-7fcf9e5361f8/volumes" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.750314 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.750563 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.750691 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.750823 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.751648 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h9fd\" (UniqueName: \"kubernetes.io/projected/bacb8184-1aa1-400c-99c8-1cab84e83cd7-kube-api-access-9h9fd\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.751799 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.751146 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.752620 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.753111 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.753275 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-logs\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.848158 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cf775d657-87zdn"] Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.850372 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.868046 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf775d657-87zdn"] Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.869208 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7mv7p" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.869357 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.869575 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.869716 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.869801 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.869884 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.920501 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-logs\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.921472 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.921528 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.921770 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.932190 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958384 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-credential-keys\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958540 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-combined-ca-bundle\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958584 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5nwf\" (UniqueName: \"kubernetes.io/projected/f6976ffd-7286-4347-b8af-607803a96768-kube-api-access-r5nwf\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958635 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-scripts\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958686 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-fernet-keys\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958706 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-config-data\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958734 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-internal-tls-certs\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:22 crc kubenswrapper[4856]: I1122 07:25:22.958768 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-public-tls-certs\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.020411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.020467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h9fd\" (UniqueName: \"kubernetes.io/projected/bacb8184-1aa1-400c-99c8-1cab84e83cd7-kube-api-access-9h9fd\") pod \"glance-default-internal-api-0\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.059779 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-combined-ca-bundle\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.059840 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5nwf\" (UniqueName: \"kubernetes.io/projected/f6976ffd-7286-4347-b8af-607803a96768-kube-api-access-r5nwf\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.059871 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-scripts\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.059944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-fernet-keys\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.059971 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-config-data\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.060000 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-internal-tls-certs\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.060028 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-public-tls-certs\") pod 
\"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.060070 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-credential-keys\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.065064 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-scripts\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.065367 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-fernet-keys\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.065612 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-credential-keys\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.065822 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-internal-tls-certs\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.066012 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-public-tls-certs\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.067100 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-combined-ca-bundle\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.070167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-config-data\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.077876 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5nwf\" (UniqueName: \"kubernetes.io/projected/f6976ffd-7286-4347-b8af-607803a96768-kube-api-access-r5nwf\") pod \"keystone-6cf775d657-87zdn\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 
07:25:23.171726 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.221474 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.734624 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf775d657-87zdn"] Nov 22 07:25:23 crc kubenswrapper[4856]: W1122 07:25:23.737692 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6976ffd_7286_4347_b8af_607803a96768.slice/crio-45afbbbd66324ae2272304d5459e72de6394c18ce1ca18ce20af0b57f9941bec WatchSource:0}: Error finding container 45afbbbd66324ae2272304d5459e72de6394c18ce1ca18ce20af0b57f9941bec: Status 404 returned error can't find the container with id 45afbbbd66324ae2272304d5459e72de6394c18ce1ca18ce20af0b57f9941bec Nov 22 07:25:23 crc kubenswrapper[4856]: W1122 07:25:23.775116 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbacb8184_1aa1_400c_99c8_1cab84e83cd7.slice/crio-8b8d18a78000ba17e839d09c57953a2d0d5cf19fc3b870dfa1b5b2c0080451ca WatchSource:0}: Error finding container 8b8d18a78000ba17e839d09c57953a2d0d5cf19fc3b870dfa1b5b2c0080451ca: Status 404 returned error can't find the container with id 8b8d18a78000ba17e839d09c57953a2d0d5cf19fc3b870dfa1b5b2c0080451ca Nov 22 07:25:23 crc kubenswrapper[4856]: I1122 07:25:23.776376 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:25:24 crc kubenswrapper[4856]: I1122 07:25:24.183580 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf775d657-87zdn" event={"ID":"f6976ffd-7286-4347-b8af-607803a96768","Type":"ContainerStarted","Data":"45afbbbd66324ae2272304d5459e72de6394c18ce1ca18ce20af0b57f9941bec"} Nov 22 07:25:24 crc kubenswrapper[4856]: I1122 07:25:24.189250 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bacb8184-1aa1-400c-99c8-1cab84e83cd7","Type":"ContainerStarted","Data":"8b8d18a78000ba17e839d09c57953a2d0d5cf19fc3b870dfa1b5b2c0080451ca"} Nov 22 07:25:24 crc kubenswrapper[4856]: I1122 07:25:24.492854 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.167243 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.211249 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" event={"ID":"736bf16a-9b84-4646-94fe-4eb5242fae71","Type":"ContainerDied","Data":"b60ec4564b0ea0b6259bbf88734285f3de1cf43c68187932127a641d5f66f291"} Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.211324 4856 scope.go:117] "RemoveContainer" containerID="c1d8a20e69aad16ac7e07befcaa71f1ada72b72802c30ee6137ead75b22c170b" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.211269 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79f9fb8c9c-6qrw9" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.213559 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" event={"ID":"4cae477b-f4c8-416e-ac2d-de6cecccfafc","Type":"ContainerStarted","Data":"7147ecaff322802e92bd7b2bc26f58f2536def0b991b416a409d90dc898d0ab6"} Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.268762 4856 scope.go:117] "RemoveContainer" containerID="3405e6ef45bdff5aefe5ca7a61c095aea67c3e9c1a448788fb95819ba429c170" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.307054 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-nb\") pod \"736bf16a-9b84-4646-94fe-4eb5242fae71\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.307172 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzlnq\" (UniqueName: \"kubernetes.io/projected/736bf16a-9b84-4646-94fe-4eb5242fae71-kube-api-access-zzlnq\") pod \"736bf16a-9b84-4646-94fe-4eb5242fae71\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.307245 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-dns-svc\") pod \"736bf16a-9b84-4646-94fe-4eb5242fae71\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.307274 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-sb\") pod \"736bf16a-9b84-4646-94fe-4eb5242fae71\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.307297 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-config\") pod \"736bf16a-9b84-4646-94fe-4eb5242fae71\" (UID: \"736bf16a-9b84-4646-94fe-4eb5242fae71\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.313583 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736bf16a-9b84-4646-94fe-4eb5242fae71-kube-api-access-zzlnq" (OuterVolumeSpecName: "kube-api-access-zzlnq") pod "736bf16a-9b84-4646-94fe-4eb5242fae71" (UID: "736bf16a-9b84-4646-94fe-4eb5242fae71"). InnerVolumeSpecName "kube-api-access-zzlnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.357676 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-config" (OuterVolumeSpecName: "config") pod "736bf16a-9b84-4646-94fe-4eb5242fae71" (UID: "736bf16a-9b84-4646-94fe-4eb5242fae71"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.358877 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "736bf16a-9b84-4646-94fe-4eb5242fae71" (UID: "736bf16a-9b84-4646-94fe-4eb5242fae71"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.365971 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "736bf16a-9b84-4646-94fe-4eb5242fae71" (UID: "736bf16a-9b84-4646-94fe-4eb5242fae71"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.367574 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "736bf16a-9b84-4646-94fe-4eb5242fae71" (UID: "736bf16a-9b84-4646-94fe-4eb5242fae71"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.409280 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzlnq\" (UniqueName: \"kubernetes.io/projected/736bf16a-9b84-4646-94fe-4eb5242fae71-kube-api-access-zzlnq\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.409315 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.409325 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.409335 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.409343 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/736bf16a-9b84-4646-94fe-4eb5242fae71-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.422952 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.509909 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-config-data\") pod \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.510213 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.510359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-combined-ca-bundle\") pod \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.510463 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-httpd-run\") pod \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.510600 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-scripts\") pod \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.510749 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-logs\") pod \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.510884 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwm68\" (UniqueName: \"kubernetes.io/projected/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-kube-api-access-fwm68\") pod \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\" (UID: \"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0\") " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.512863 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" (UID: "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.513272 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-logs" (OuterVolumeSpecName: "logs") pod "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" (UID: "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.515136 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-kube-api-access-fwm68" (OuterVolumeSpecName: "kube-api-access-fwm68") pod "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" (UID: "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0"). InnerVolumeSpecName "kube-api-access-fwm68". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.516001 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" (UID: "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.516207 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-scripts" (OuterVolumeSpecName: "scripts") pod "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" (UID: "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.538629 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" (UID: "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.549149 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79f9fb8c9c-6qrw9"] Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.555558 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79f9fb8c9c-6qrw9"] Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.562999 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-config-data" (OuterVolumeSpecName: "config-data") pod "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" (UID: "1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.613050 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.613567 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwm68\" (UniqueName: \"kubernetes.io/projected/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-kube-api-access-fwm68\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.613641 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.613762 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.613820 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.613891 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.613966 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.632965 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 22 07:25:25 crc kubenswrapper[4856]: I1122 07:25:25.715308 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.226604 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf775d657-87zdn" event={"ID":"f6976ffd-7286-4347-b8af-607803a96768","Type":"ContainerStarted","Data":"3cdce92348e8a5abc8c54f390907c002ea710c31b653f0e1d2c690885f3a2712"} Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.228828 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bacb8184-1aa1-400c-99c8-1cab84e83cd7","Type":"ContainerStarted","Data":"195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df"} Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.231676 4856 generic.go:334] "Generic (PLEG): container finished" podID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerID="7147ecaff322802e92bd7b2bc26f58f2536def0b991b416a409d90dc898d0ab6" exitCode=0 Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.231751 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" event={"ID":"4cae477b-f4c8-416e-ac2d-de6cecccfafc","Type":"ContainerDied","Data":"7147ecaff322802e92bd7b2bc26f58f2536def0b991b416a409d90dc898d0ab6"} Nov 22 
07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.235718 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0","Type":"ContainerDied","Data":"989597828e03ffcf2dc6600ae4a624bf6a0a6f45adafc7f07685a48c81abce6a"} Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.235776 4856 scope.go:117] "RemoveContainer" containerID="4d6a38a16008a90c2713b1366cb55554fa650ba2817fa73cf36c092e78d835b9" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.235904 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.326111 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.326318 4856 scope.go:117] "RemoveContainer" containerID="96d03177fd0d127c95ccc47207005db1d7e1d9b3409032848f58607a735eefdb" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.340455 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.349871 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:25:26 crc kubenswrapper[4856]: E1122 07:25:26.350498 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="init" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.350534 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="init" Nov 22 07:25:26 crc kubenswrapper[4856]: E1122 07:25:26.350548 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.350556 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" Nov 22 07:25:26 crc kubenswrapper[4856]: E1122 07:25:26.350589 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-log" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.350598 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-log" Nov 22 07:25:26 crc kubenswrapper[4856]: E1122 07:25:26.350617 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-httpd" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.350623 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-httpd" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.350777 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-httpd" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.350789 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" containerName="dnsmasq-dns" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.350800 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" containerName="glance-log" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.353355 4856 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.357678 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.363894 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.381426 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.429017 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-logs\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.429300 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.429466 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.429607 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.429752 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.429909 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.430270 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.430407 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4n5\" (UniqueName: \"kubernetes.io/projected/4e4b2fd6-9289-4543-ac15-75da468b55c9-kube-api-access-fs4n5\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531743 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4n5\" (UniqueName: \"kubernetes.io/projected/4e4b2fd6-9289-4543-ac15-75da468b55c9-kube-api-access-fs4n5\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531799 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-logs\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531832 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531859 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531900 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531938 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.531981 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.532462 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.534828 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-logs\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.534941 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.539979 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.540149 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.541246 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.542363 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.556434 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4n5\" (UniqueName: \"kubernetes.io/projected/4e4b2fd6-9289-4543-ac15-75da468b55c9-kube-api-access-fs4n5\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.567879 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.692789 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.720282 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0" path="/var/lib/kubelet/pods/1f42cbc3-ef8a-429b-9dd0-fa55bce12ee0/volumes" Nov 22 07:25:26 crc kubenswrapper[4856]: I1122 07:25:26.721004 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736bf16a-9b84-4646-94fe-4eb5242fae71" path="/var/lib/kubelet/pods/736bf16a-9b84-4646-94fe-4eb5242fae71/volumes" Nov 22 07:25:26 crc kubenswrapper[4856]: E1122 07:25:26.823711 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1" Nov 22 07:25:26 crc kubenswrapper[4856]: E1122 07:25:26.824114 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3e60644a-2d82-40ed-9d0b-bb144837842a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 07:25:27 crc kubenswrapper[4856]: I1122 07:25:27.271192 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:25:28 crc kubenswrapper[4856]: I1122 07:25:28.305621 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e4b2fd6-9289-4543-ac15-75da468b55c9","Type":"ContainerStarted","Data":"5cd38466682d45fcc0fb6fdcef883681918b301b838befbf074021cc76e0d489"} Nov 22 07:25:28 crc kubenswrapper[4856]: I1122 07:25:28.306117 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:28 crc kubenswrapper[4856]: I1122 07:25:28.333166 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cf775d657-87zdn" podStartSLOduration=6.333139984 podStartE2EDuration="6.333139984s" podCreationTimestamp="2025-11-22 07:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-22 07:25:28.327401348 +0000 UTC m=+1370.740794606" watchObservedRunningTime="2025-11-22 07:25:28.333139984 +0000 UTC m=+1370.746533242" Nov 22 07:25:29 crc kubenswrapper[4856]: I1122 07:25:29.315591 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e4b2fd6-9289-4543-ac15-75da468b55c9","Type":"ContainerStarted","Data":"cfdacf6da7c588ca0bcf1479465101a4822c4d7379ea095c311344318b3ab4da"} Nov 22 07:25:29 crc kubenswrapper[4856]: I1122 07:25:29.320139 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" event={"ID":"4cae477b-f4c8-416e-ac2d-de6cecccfafc","Type":"ContainerStarted","Data":"78e1262efa93dbe79c59f24fda69e8fedc14e777db6de1096f1712cbb85890b9"} Nov 22 07:25:30 crc kubenswrapper[4856]: I1122 07:25:30.333101 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bacb8184-1aa1-400c-99c8-1cab84e83cd7","Type":"ContainerStarted","Data":"ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d"} Nov 22 07:25:30 crc kubenswrapper[4856]: I1122 07:25:30.333248 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:25:32 crc kubenswrapper[4856]: I1122 07:25:32.383662 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.383637121 podStartE2EDuration="10.383637121s" podCreationTimestamp="2025-11-22 07:25:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:25:32.381236897 +0000 UTC m=+1374.794630155" watchObservedRunningTime="2025-11-22 07:25:32.383637121 +0000 UTC m=+1374.797030379" Nov 22 07:25:32 crc kubenswrapper[4856]: I1122 07:25:32.388311 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" podStartSLOduration=37.388288769 podStartE2EDuration="37.388288769s" podCreationTimestamp="2025-11-22 07:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:25:30.35793068 +0000 UTC m=+1372.771323938" watchObservedRunningTime="2025-11-22 07:25:32.388288769 +0000 UTC m=+1374.801682037" Nov 22 07:25:33 crc kubenswrapper[4856]: I1122 07:25:33.173071 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:33 crc kubenswrapper[4856]: I1122 07:25:33.173115 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:33 crc kubenswrapper[4856]: I1122 07:25:33.205132 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:33 crc kubenswrapper[4856]: I1122 07:25:33.216638 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:33 crc kubenswrapper[4856]: I1122 07:25:33.357721 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:33 crc kubenswrapper[4856]: I1122 07:25:33.357766 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:35 crc 
kubenswrapper[4856]: I1122 07:25:35.454124 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:35 crc kubenswrapper[4856]: I1122 07:25:35.809684 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:25:35 crc kubenswrapper[4856]: I1122 07:25:35.874502 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794df4974f-bqzxn"] Nov 22 07:25:35 crc kubenswrapper[4856]: I1122 07:25:35.874810 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerName="dnsmasq-dns" containerID="cri-o://d08b0f39d313ca2bfb10a627ce9f6382f91298f77a6a3de7131c3e98a404d232" gracePeriod=10 Nov 22 07:25:37 crc kubenswrapper[4856]: I1122 07:25:37.393778 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:25:38 crc kubenswrapper[4856]: I1122 07:25:38.419964 4856 generic.go:334] "Generic (PLEG): container finished" podID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerID="d08b0f39d313ca2bfb10a627ce9f6382f91298f77a6a3de7131c3e98a404d232" exitCode=0 Nov 22 07:25:38 crc kubenswrapper[4856]: I1122 07:25:38.420079 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" event={"ID":"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82","Type":"ContainerDied","Data":"d08b0f39d313ca2bfb10a627ce9f6382f91298f77a6a3de7131c3e98a404d232"} Nov 22 07:25:39 crc kubenswrapper[4856]: I1122 07:25:39.609918 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.453336 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" event={"ID":"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82","Type":"ContainerDied","Data":"50013ece6a1fe425ee65c97430756957b2c922cba2c2c80ff9ec64cb1bf55c3b"} Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.453790 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50013ece6a1fe425ee65c97430756957b2c922cba2c2c80ff9ec64cb1bf55c3b" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.479345 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.592753 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-nb\") pod \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.593554 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-config\") pod \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.593667 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-458kg\" (UniqueName: \"kubernetes.io/projected/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-kube-api-access-458kg\") pod \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.593758 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-dns-svc\") pod \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.593867 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-sb\") pod \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\" (UID: \"a1748cfc-ac6b-454c-9dbd-1e18f3b16d82\") " Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.598835 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-kube-api-access-458kg" (OuterVolumeSpecName: "kube-api-access-458kg") pod "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" (UID: "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82"). InnerVolumeSpecName "kube-api-access-458kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.638781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" (UID: "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.639636 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" (UID: "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.642744 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-config" (OuterVolumeSpecName: "config") pod "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" (UID: "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.654878 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" (UID: "a1748cfc-ac6b-454c-9dbd-1e18f3b16d82"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.695889 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.695934 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.695944 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.695958 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-458kg\" (UniqueName: \"kubernetes.io/projected/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-kube-api-access-458kg\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:41 crc kubenswrapper[4856]: I1122 07:25:41.695969 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:25:42 crc kubenswrapper[4856]: I1122 07:25:42.460531 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-794df4974f-bqzxn" Nov 22 07:25:42 crc kubenswrapper[4856]: I1122 07:25:42.495490 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-794df4974f-bqzxn"] Nov 22 07:25:42 crc kubenswrapper[4856]: I1122 07:25:42.501827 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-794df4974f-bqzxn"] Nov 22 07:25:42 crc kubenswrapper[4856]: I1122 07:25:42.720628 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" path="/var/lib/kubelet/pods/a1748cfc-ac6b-454c-9dbd-1e18f3b16d82/volumes" Nov 22 07:25:54 crc kubenswrapper[4856]: I1122 07:25:54.922830 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.722330 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 07:25:58 crc kubenswrapper[4856]: E1122 07:25:58.723238 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerName="dnsmasq-dns" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.723250 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerName="dnsmasq-dns" Nov 22 07:25:58 crc kubenswrapper[4856]: E1122 07:25:58.723265 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerName="init" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.723271 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerName="init" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.723440 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1748cfc-ac6b-454c-9dbd-1e18f3b16d82" containerName="dnsmasq-dns" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.724482 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.730297 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.733887 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.733994 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.734036 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7gbdq" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.799663 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.800779 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.800961 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config-secret\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.801321 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsf98\" (UniqueName: \"kubernetes.io/projected/1538a039-b87f-4e9a-92ba-837236d61e99-kube-api-access-qsf98\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.904001 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.904344 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.904464 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config-secret\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.904651 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qsf98\" (UniqueName: \"kubernetes.io/projected/1538a039-b87f-4e9a-92ba-837236d61e99-kube-api-access-qsf98\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.907071 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.912388 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.920833 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config-secret\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:58 crc kubenswrapper[4856]: I1122 07:25:58.923563 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsf98\" (UniqueName: \"kubernetes.io/projected/1538a039-b87f-4e9a-92ba-837236d61e99-kube-api-access-qsf98\") pod \"openstackclient\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.027588 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.030478 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.055414 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.062331 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.063535 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.070714 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.210759 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a94a048-f961-4675-85bf-88414e414a51-openstack-config\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.210960 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.211024 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h8fx\" (UniqueName: \"kubernetes.io/projected/9a94a048-f961-4675-85bf-88414e414a51-kube-api-access-5h8fx\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.211062 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.313075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a94a048-f961-4675-85bf-88414e414a51-openstack-config\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.313391 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.313537 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h8fx\" (UniqueName: \"kubernetes.io/projected/9a94a048-f961-4675-85bf-88414e414a51-kube-api-access-5h8fx\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.313608 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.314399 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a94a048-f961-4675-85bf-88414e414a51-openstack-config\") pod \"openstackclient\" (UID: 
\"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.318652 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.318677 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-openstack-config-secret\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.332008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h8fx\" (UniqueName: \"kubernetes.io/projected/9a94a048-f961-4675-85bf-88414e414a51-kube-api-access-5h8fx\") pod \"openstackclient\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.385839 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.754184 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:25:59 crc kubenswrapper[4856]: I1122 07:25:59.754235 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:26:09 crc kubenswrapper[4856]: I1122 07:26:09.696633 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e4b2fd6-9289-4543-ac15-75da468b55c9","Type":"ContainerStarted","Data":"9f4f428a44c5a3482bea4907846448ff80ccd7bbbe766f2ded1a78ab5486a550"} Nov 22 07:26:11 crc kubenswrapper[4856]: I1122 07:26:11.732931 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=45.732912184 podStartE2EDuration="45.732912184s" podCreationTimestamp="2025-11-22 07:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:26:11.729993174 +0000 UTC m=+1414.143386432" watchObservedRunningTime="2025-11-22 07:26:11.732912184 +0000 UTC m=+1414.146305452" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.508858 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.509856 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:26:15 crc kubenswrapper[4856]: E1122 07:26:15.562485 4856 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 22 07:26:15 crc kubenswrapper[4856]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_openstackclient_openstack_1538a039-b87f-4e9a-92ba-837236d61e99_0(9f4735c576076fb62eca299028460d1c6b0586cfdb1e337db227e5248e563025): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9f4735c576076fb62eca299028460d1c6b0586cfdb1e337db227e5248e563025" Netns:"/var/run/netns/ffb4f7c6-7045-4fe5-88a0-2e498cbec627" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9f4735c576076fb62eca299028460d1c6b0586cfdb1e337db227e5248e563025;K8S_POD_UID=1538a039-b87f-4e9a-92ba-837236d61e99" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/1538a039-b87f-4e9a-92ba-837236d61e99]: expected pod UID "1538a039-b87f-4e9a-92ba-837236d61e99" but got "9a94a048-f961-4675-85bf-88414e414a51" from Kube API Nov 22 07:26:15 crc kubenswrapper[4856]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 22 07:26:15 crc kubenswrapper[4856]: > Nov 22 07:26:15 crc kubenswrapper[4856]: E1122 07:26:15.562608 4856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 22 07:26:15 crc kubenswrapper[4856]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_1538a039-b87f-4e9a-92ba-837236d61e99_0(9f4735c576076fb62eca299028460d1c6b0586cfdb1e337db227e5248e563025): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9f4735c576076fb62eca299028460d1c6b0586cfdb1e337db227e5248e563025" Netns:"/var/run/netns/ffb4f7c6-7045-4fe5-88a0-2e498cbec627" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9f4735c576076fb62eca299028460d1c6b0586cfdb1e337db227e5248e563025;K8S_POD_UID=1538a039-b87f-4e9a-92ba-837236d61e99" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/1538a039-b87f-4e9a-92ba-837236d61e99]: expected pod UID "1538a039-b87f-4e9a-92ba-837236d61e99" but got "9a94a048-f961-4675-85bf-88414e414a51" from Kube API Nov 22 07:26:15 crc kubenswrapper[4856]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 22 07:26:15 crc kubenswrapper[4856]: > pod="openstack/openstackclient" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.746221 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.746221 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9a94a048-f961-4675-85bf-88414e414a51","Type":"ContainerStarted","Data":"e48c80b401ea4343370874f0569b4e364989795741b91f69d8f5bc5cbef45833"} Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.749944 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1538a039-b87f-4e9a-92ba-837236d61e99" podUID="9a94a048-f961-4675-85bf-88414e414a51" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.756750 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.760438 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1538a039-b87f-4e9a-92ba-837236d61e99" podUID="9a94a048-f961-4675-85bf-88414e414a51" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.896620 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-combined-ca-bundle\") pod \"1538a039-b87f-4e9a-92ba-837236d61e99\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.896692 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsf98\" (UniqueName: \"kubernetes.io/projected/1538a039-b87f-4e9a-92ba-837236d61e99-kube-api-access-qsf98\") pod \"1538a039-b87f-4e9a-92ba-837236d61e99\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.896794 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config-secret\") pod \"1538a039-b87f-4e9a-92ba-837236d61e99\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.896832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config\") pod \"1538a039-b87f-4e9a-92ba-837236d61e99\" (UID: \"1538a039-b87f-4e9a-92ba-837236d61e99\") " Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.897550 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1538a039-b87f-4e9a-92ba-837236d61e99" (UID: "1538a039-b87f-4e9a-92ba-837236d61e99"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.899378 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.903088 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1538a039-b87f-4e9a-92ba-837236d61e99-kube-api-access-qsf98" (OuterVolumeSpecName: "kube-api-access-qsf98") pod "1538a039-b87f-4e9a-92ba-837236d61e99" (UID: "1538a039-b87f-4e9a-92ba-837236d61e99"). InnerVolumeSpecName "kube-api-access-qsf98". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.903486 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1538a039-b87f-4e9a-92ba-837236d61e99" (UID: "1538a039-b87f-4e9a-92ba-837236d61e99"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:26:15 crc kubenswrapper[4856]: I1122 07:26:15.903746 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1538a039-b87f-4e9a-92ba-837236d61e99" (UID: "1538a039-b87f-4e9a-92ba-837236d61e99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.001099 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.001137 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsf98\" (UniqueName: \"kubernetes.io/projected/1538a039-b87f-4e9a-92ba-837236d61e99-kube-api-access-qsf98\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.001152 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1538a039-b87f-4e9a-92ba-837236d61e99-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.693258 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.693716 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.721843 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1538a039-b87f-4e9a-92ba-837236d61e99" path="/var/lib/kubelet/pods/1538a039-b87f-4e9a-92ba-837236d61e99/volumes" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.725631 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.737251 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:26:16 crc 
kubenswrapper[4856]: I1122 07:26:16.755967 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.756278 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.756326 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:26:16 crc kubenswrapper[4856]: I1122 07:26:16.767751 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="1538a039-b87f-4e9a-92ba-837236d61e99" podUID="9a94a048-f961-4675-85bf-88414e414a51" Nov 22 07:26:18 crc kubenswrapper[4856]: I1122 07:26:18.778054 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n9nhw" event={"ID":"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c","Type":"ContainerStarted","Data":"e506d22d373d63c5c5df7338ebfdf37d6f2889f6528d9d2937bd53d522fa657f"} Nov 22 07:26:18 crc kubenswrapper[4856]: I1122 07:26:18.795643 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-298l7" event={"ID":"f62cc6af-1032-4593-a11f-0dde4a6020ae","Type":"ContainerStarted","Data":"c3368e9bb887c530083f7a09aacec83accf90141e2a0af6a2fffe8655043dddd"} Nov 22 07:26:18 crc kubenswrapper[4856]: I1122 07:26:18.805416 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-n9nhw" podStartSLOduration=38.958742731 podStartE2EDuration="1m55.803236882s" podCreationTimestamp="2025-11-22 07:24:23 +0000 UTC" firstStartedPulling="2025-11-22 07:24:24.872308762 +0000 UTC m=+1307.285702020" lastFinishedPulling="2025-11-22 07:25:41.716802913 +0000 UTC m=+1384.130196171" observedRunningTime="2025-11-22 07:26:18.794145514 +0000 UTC m=+1421.207538782" watchObservedRunningTime="2025-11-22 07:26:18.803236882 +0000 UTC m=+1421.216630150" Nov 22 07:26:18 crc kubenswrapper[4856]: I1122 07:26:18.828060 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-298l7" podStartSLOduration=38.211192961 podStartE2EDuration="1m54.828033986s" podCreationTimestamp="2025-11-22 07:24:24 +0000 UTC" firstStartedPulling="2025-11-22 07:24:25.099899287 +0000 UTC m=+1307.513292545" lastFinishedPulling="2025-11-22 07:25:41.716740312 +0000 UTC m=+1384.130133570" observedRunningTime="2025-11-22 07:26:18.816821091 +0000 UTC m=+1421.230214359" watchObservedRunningTime="2025-11-22 07:26:18.828033986 +0000 UTC m=+1421.241427254" Nov 22 07:26:19 crc kubenswrapper[4856]: I1122 07:26:19.010812 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:26:19 crc kubenswrapper[4856]: I1122 07:26:19.010924 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:26:19 crc kubenswrapper[4856]: I1122 07:26:19.080422 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:26:29 crc kubenswrapper[4856]: I1122 07:26:29.754681 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:26:29 crc kubenswrapper[4856]: 
I1122 07:26:29.755334 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:26:29 crc kubenswrapper[4856]: E1122 07:26:29.899926 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050" Nov 22 07:26:29 crc kubenswrapper[4856]: E1122 07:26:29.900194 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:8536169e5537fe6c330eba814248abdcf39cdd8f7e7336034d74e6fda9544050,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjc85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3e60644a-2d82-40ed-9d0b-bb144837842a): ErrImagePull: rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 22 07:26:29 crc kubenswrapper[4856]: E1122 07:26:29.901757 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="3e60644a-2d82-40ed-9d0b-bb144837842a" Nov 22 07:26:30 crc kubenswrapper[4856]: I1122 07:26:30.895033 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3e60644a-2d82-40ed-9d0b-bb144837842a" containerName="ceilometer-notification-agent" containerID="cri-o://ac2f159623db3bfc776fec7df8aadf9a79636a8bec4d098e00bbbf20e2c19d12" gracePeriod=30 Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.076357 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g8fst"] Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.078693 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.099859 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8fst"] Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.259494 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-utilities\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.259591 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pshlp\" (UniqueName: \"kubernetes.io/projected/f2868d95-c9d5-4cad-8bba-d80388d761ef-kube-api-access-pshlp\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.260144 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-catalog-content\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.367783 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-catalog-content\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.367923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-utilities\") pod 
\"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.367986 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pshlp\" (UniqueName: \"kubernetes.io/projected/f2868d95-c9d5-4cad-8bba-d80388d761ef-kube-api-access-pshlp\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.368481 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-catalog-content\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.368637 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-utilities\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.394371 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pshlp\" (UniqueName: \"kubernetes.io/projected/f2868d95-c9d5-4cad-8bba-d80388d761ef-kube-api-access-pshlp\") pod \"redhat-operators-g8fst\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.418049 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:26:31 crc kubenswrapper[4856]: I1122 07:26:31.927332 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g8fst"] Nov 22 07:26:32 crc kubenswrapper[4856]: I1122 07:26:32.942855 4856 generic.go:334] "Generic (PLEG): container finished" podID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerID="cf782038af024e8f038fb0bfa30bc2e659e9441deb5c7119d3424bedee5bdde0" exitCode=0 Nov 22 07:26:32 crc kubenswrapper[4856]: I1122 07:26:32.942945 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fst" event={"ID":"f2868d95-c9d5-4cad-8bba-d80388d761ef","Type":"ContainerDied","Data":"cf782038af024e8f038fb0bfa30bc2e659e9441deb5c7119d3424bedee5bdde0"} Nov 22 07:26:32 crc kubenswrapper[4856]: I1122 07:26:32.943501 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fst" event={"ID":"f2868d95-c9d5-4cad-8bba-d80388d761ef","Type":"ContainerStarted","Data":"e4c85c9240f6224cacf0317912369a9742f2eba7427356b5d97ae49928c24311"} Nov 22 07:26:35 crc kubenswrapper[4856]: I1122 07:26:35.975657 4856 generic.go:334] "Generic (PLEG): container finished" podID="3e60644a-2d82-40ed-9d0b-bb144837842a" containerID="ac2f159623db3bfc776fec7df8aadf9a79636a8bec4d098e00bbbf20e2c19d12" exitCode=0 Nov 22 07:26:35 crc kubenswrapper[4856]: I1122 07:26:35.975707 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e60644a-2d82-40ed-9d0b-bb144837842a","Type":"ContainerDied","Data":"ac2f159623db3bfc776fec7df8aadf9a79636a8bec4d098e00bbbf20e2c19d12"} Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.328133 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-56bc6597ff-ll6fl"] Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.330837 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.335347 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.335489 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.335551 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.344959 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56bc6597ff-ll6fl"] Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.418430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-run-httpd\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.418541 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-combined-ca-bundle\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.418735 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-etc-swift\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.418794 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-internal-tls-certs\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.418938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-public-tls-certs\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.418987 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-config-data\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.419088 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-log-httpd\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " 
pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.419162 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjznr\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-kube-api-access-qjznr\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.520916 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-combined-ca-bundle\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.521059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-etc-swift\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.521093 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-internal-tls-certs\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.521148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-public-tls-certs\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.521174 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-config-data\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.521226 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-log-httpd\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.521263 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjznr\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-kube-api-access-qjznr\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.521313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-run-httpd\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " 
pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.522409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-run-httpd\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.522526 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-log-httpd\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.528151 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-etc-swift\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.528237 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-public-tls-certs\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.528341 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-combined-ca-bundle\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.533095 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-internal-tls-certs\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.536549 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-config-data\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.546559 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjznr\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-kube-api-access-qjznr\") pod \"swift-proxy-56bc6597ff-ll6fl\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:38 crc kubenswrapper[4856]: I1122 07:26:38.667660 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.499398 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.547628 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-run-httpd\") pod \"3e60644a-2d82-40ed-9d0b-bb144837842a\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548017 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-log-httpd\") pod \"3e60644a-2d82-40ed-9d0b-bb144837842a\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548039 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-sg-core-conf-yaml\") pod \"3e60644a-2d82-40ed-9d0b-bb144837842a\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548090 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3e60644a-2d82-40ed-9d0b-bb144837842a" (UID: "3e60644a-2d82-40ed-9d0b-bb144837842a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548114 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-config-data\") pod \"3e60644a-2d82-40ed-9d0b-bb144837842a\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548140 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-scripts\") pod \"3e60644a-2d82-40ed-9d0b-bb144837842a\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548244 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3e60644a-2d82-40ed-9d0b-bb144837842a" (UID: "3e60644a-2d82-40ed-9d0b-bb144837842a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548255 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjc85\" (UniqueName: \"kubernetes.io/projected/3e60644a-2d82-40ed-9d0b-bb144837842a-kube-api-access-jjc85\") pod \"3e60644a-2d82-40ed-9d0b-bb144837842a\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548283 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-combined-ca-bundle\") pod \"3e60644a-2d82-40ed-9d0b-bb144837842a\" (UID: \"3e60644a-2d82-40ed-9d0b-bb144837842a\") " Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548755 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.548777 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e60644a-2d82-40ed-9d0b-bb144837842a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.554746 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3e60644a-2d82-40ed-9d0b-bb144837842a" (UID: "3e60644a-2d82-40ed-9d0b-bb144837842a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.556906 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-scripts" (OuterVolumeSpecName: "scripts") pod "3e60644a-2d82-40ed-9d0b-bb144837842a" (UID: "3e60644a-2d82-40ed-9d0b-bb144837842a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.561049 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e60644a-2d82-40ed-9d0b-bb144837842a-kube-api-access-jjc85" (OuterVolumeSpecName: "kube-api-access-jjc85") pod "3e60644a-2d82-40ed-9d0b-bb144837842a" (UID: "3e60644a-2d82-40ed-9d0b-bb144837842a"). InnerVolumeSpecName "kube-api-access-jjc85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.581287 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-config-data" (OuterVolumeSpecName: "config-data") pod "3e60644a-2d82-40ed-9d0b-bb144837842a" (UID: "3e60644a-2d82-40ed-9d0b-bb144837842a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.581658 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e60644a-2d82-40ed-9d0b-bb144837842a" (UID: "3e60644a-2d82-40ed-9d0b-bb144837842a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.650979 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.651028 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.651040 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.651053 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjc85\" (UniqueName: \"kubernetes.io/projected/3e60644a-2d82-40ed-9d0b-bb144837842a-kube-api-access-jjc85\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:51 crc kubenswrapper[4856]: I1122 07:26:51.651070 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e60644a-2d82-40ed-9d0b-bb144837842a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.102916 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3e60644a-2d82-40ed-9d0b-bb144837842a","Type":"ContainerDied","Data":"3b2272adc0ae334e531057d730af0ee18e91344a241ae15700a59fb9ff6915f5"} Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.102975 4856 scope.go:117] "RemoveContainer" containerID="ac2f159623db3bfc776fec7df8aadf9a79636a8bec4d098e00bbbf20e2c19d12" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.103133 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.190170 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.200973 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.210381 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:26:52 crc kubenswrapper[4856]: E1122 07:26:52.210806 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e60644a-2d82-40ed-9d0b-bb144837842a" containerName="ceilometer-notification-agent" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.210821 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e60644a-2d82-40ed-9d0b-bb144837842a" containerName="ceilometer-notification-agent" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.211068 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e60644a-2d82-40ed-9d0b-bb144837842a" containerName="ceilometer-notification-agent" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.212640 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.217902 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.218072 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.218105 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.269446 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.269503 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-config-data\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.269765 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-scripts\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.269792 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8q6p\" (UniqueName: \"kubernetes.io/projected/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-kube-api-access-c8q6p\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.269811 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-run-httpd\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.269854 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.269896 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-log-httpd\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.371615 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-log-httpd\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.372141 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-log-httpd\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.372311 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.372358 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-config-data\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.372378 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-scripts\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.372403 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8q6p\" (UniqueName: \"kubernetes.io/projected/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-kube-api-access-c8q6p\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.372421 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-run-httpd\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.372474 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.373571 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-run-httpd\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.378375 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.378770 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-config-data\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.379404 4856 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-scripts\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.379783 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.388098 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8q6p\" (UniqueName: \"kubernetes.io/projected/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-kube-api-access-c8q6p\") pod \"ceilometer-0\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.536395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.676522 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-56bc6597ff-ll6fl"] Nov 22 07:26:52 crc kubenswrapper[4856]: I1122 07:26:52.749281 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e60644a-2d82-40ed-9d0b-bb144837842a" path="/var/lib/kubelet/pods/3e60644a-2d82-40ed-9d0b-bb144837842a/volumes" Nov 22 07:26:53 crc kubenswrapper[4856]: I1122 07:26:53.072820 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:26:53 crc kubenswrapper[4856]: W1122 07:26:53.075415 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb467ee7e_f79f_4cbb_9cfb_13d4758e11b9.slice/crio-775cbc629469fe633c605eb1c2c584805a475c4475b04f0483da8174927c3cb9 WatchSource:0}: Error finding container 775cbc629469fe633c605eb1c2c584805a475c4475b04f0483da8174927c3cb9: Status 404 returned error can't find the container with id 775cbc629469fe633c605eb1c2c584805a475c4475b04f0483da8174927c3cb9 Nov 22 07:26:53 crc kubenswrapper[4856]: I1122 07:26:53.116215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerStarted","Data":"775cbc629469fe633c605eb1c2c584805a475c4475b04f0483da8174927c3cb9"} Nov 22 07:26:53 crc kubenswrapper[4856]: I1122 07:26:53.117692 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56bc6597ff-ll6fl" event={"ID":"314d3b00-9bb4-4caa-a2dd-521e70e3d73d","Type":"ContainerStarted","Data":"bf47a214c3fcadf89b3d1f49750ee64111d41e0ee997442284899dbb05d85345"} Nov 22 07:26:56 crc kubenswrapper[4856]: I1122 07:26:56.148986 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56bc6597ff-ll6fl" event={"ID":"314d3b00-9bb4-4caa-a2dd-521e70e3d73d","Type":"ContainerStarted","Data":"6219833dba75dd8b4b4fd8f9b3965d45ed8beebecf788175cdad2c1025ca7eea"} Nov 22 07:26:57 crc kubenswrapper[4856]: I1122 07:26:57.161659 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56bc6597ff-ll6fl" event={"ID":"314d3b00-9bb4-4caa-a2dd-521e70e3d73d","Type":"ContainerStarted","Data":"545281c9124acb52b1ddf1192147efb7e07a95b9f53d9d183531f5e1698bb14f"} Nov 22 07:26:57 crc kubenswrapper[4856]: I1122 07:26:57.164329 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:57 crc kubenswrapper[4856]: I1122 07:26:57.164397 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:26:58 crc kubenswrapper[4856]: I1122 07:26:58.741318 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-56bc6597ff-ll6fl" podStartSLOduration=20.741288234 podStartE2EDuration="20.741288234s" podCreationTimestamp="2025-11-22 07:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:26:57.186055385 +0000 UTC m=+1459.599448653" watchObservedRunningTime="2025-11-22 07:26:58.741288234 +0000 UTC m=+1461.154681502" Nov 22 07:26:59 crc kubenswrapper[4856]: I1122 07:26:59.755049 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:26:59 crc kubenswrapper[4856]: I1122 07:26:59.755117 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:26:59 crc kubenswrapper[4856]: I1122 07:26:59.755175 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:26:59 crc kubenswrapper[4856]: I1122 07:26:59.755940 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2d6ca7441dd492e3a581af2bfbc9e9d1023d20289aecd1a0ad5d8af62f035ce"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:26:59 crc kubenswrapper[4856]: I1122 07:26:59.755996 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://b2d6ca7441dd492e3a581af2bfbc9e9d1023d20289aecd1a0ad5d8af62f035ce" gracePeriod=600 Nov 22 07:27:01 crc kubenswrapper[4856]: I1122 07:27:01.198189 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="b2d6ca7441dd492e3a581af2bfbc9e9d1023d20289aecd1a0ad5d8af62f035ce" exitCode=0 Nov 22 07:27:01 crc kubenswrapper[4856]: I1122 07:27:01.198273 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"b2d6ca7441dd492e3a581af2bfbc9e9d1023d20289aecd1a0ad5d8af62f035ce"} Nov 22 07:27:01 crc kubenswrapper[4856]: I1122 07:27:01.198595 4856 scope.go:117] "RemoveContainer" containerID="4366d97abee77d6bcf27f0824324e78ad727912da8d9c8585365d5f93d21ed74" Nov 22 07:27:03 crc kubenswrapper[4856]: I1122 07:27:03.672951 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56bc6597ff-ll6fl" 
Nov 22 07:27:03 crc kubenswrapper[4856]: I1122 07:27:03.674445 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:27:04 crc kubenswrapper[4856]: I1122 07:27:04.197685 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:06 crc kubenswrapper[4856]: I1122 07:27:06.275824 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167"} Nov 22 07:27:06 crc kubenswrapper[4856]: I1122 07:27:06.277702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerStarted","Data":"cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829"} Nov 22 07:27:06 crc kubenswrapper[4856]: I1122 07:27:06.279653 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9a94a048-f961-4675-85bf-88414e414a51","Type":"ContainerStarted","Data":"b3b1f2a0ac6e8ef5ca8623acaf447ee1e4d4c639c63af0026dc10d1cc70ff28a"} Nov 22 07:27:06 crc kubenswrapper[4856]: I1122 07:27:06.317405 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=21.283235514 podStartE2EDuration="1m7.317386265s" podCreationTimestamp="2025-11-22 07:25:59 +0000 UTC" firstStartedPulling="2025-11-22 07:26:15.509536078 +0000 UTC m=+1417.922929336" lastFinishedPulling="2025-11-22 07:27:01.543686829 +0000 UTC m=+1463.957080087" observedRunningTime="2025-11-22 07:27:06.316800758 +0000 UTC m=+1468.730194036" watchObservedRunningTime="2025-11-22 07:27:06.317386265 +0000 UTC m=+1468.730779513" Nov 22 07:27:09 crc kubenswrapper[4856]: I1122 07:27:09.309188 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fst" event={"ID":"f2868d95-c9d5-4cad-8bba-d80388d761ef","Type":"ContainerStarted","Data":"7ab8d5864dac98a7611b1c1806965b13bd099da2476b71c4496ef076d655cb40"} Nov 22 07:27:11 crc kubenswrapper[4856]: I1122 07:27:11.338726 4856 generic.go:334] "Generic (PLEG): container finished" podID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerID="7ab8d5864dac98a7611b1c1806965b13bd099da2476b71c4496ef076d655cb40" exitCode=0 Nov 22 07:27:11 crc kubenswrapper[4856]: I1122 07:27:11.338835 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fst" event={"ID":"f2868d95-c9d5-4cad-8bba-d80388d761ef","Type":"ContainerDied","Data":"7ab8d5864dac98a7611b1c1806965b13bd099da2476b71c4496ef076d655cb40"} Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:16.384503 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerStarted","Data":"7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b"} Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:25.467973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fst" event={"ID":"f2868d95-c9d5-4cad-8bba-d80388d761ef","Type":"ContainerStarted","Data":"7c006f8ba7f553b68689ed342d834c0b736d6cafe830df1ba174db534f4f85f2"} Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:26.504483 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-g8fst" podStartSLOduration=4.446327522 podStartE2EDuration="55.50446227s" podCreationTimestamp="2025-11-22 07:26:31 +0000 UTC" firstStartedPulling="2025-11-22 07:26:32.946308854 +0000 UTC m=+1435.359702112" lastFinishedPulling="2025-11-22 07:27:24.004443592 +0000 UTC m=+1486.417836860" observedRunningTime="2025-11-22 07:27:26.500704938 +0000 UTC m=+1488.914098206" watchObservedRunningTime="2025-11-22 07:27:26.50446227 +0000 UTC m=+1488.917855528" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.378097 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p5wtg"] Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.380983 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.396322 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5wtg"] Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.485092 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htkph\" (UniqueName: \"kubernetes.io/projected/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-kube-api-access-htkph\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.485189 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-utilities\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.485236 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-catalog-content\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.586949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-utilities\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.587010 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-catalog-content\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.587105 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htkph\" (UniqueName: \"kubernetes.io/projected/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-kube-api-access-htkph\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.587522 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-utilities\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.587559 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-catalog-content\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.608124 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htkph\" (UniqueName: \"kubernetes.io/projected/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-kube-api-access-htkph\") pod \"community-operators-p5wtg\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:29 crc kubenswrapper[4856]: I1122 07:27:29.715899 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:31 crc kubenswrapper[4856]: I1122 07:27:31.420043 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:27:31 crc kubenswrapper[4856]: I1122 07:27:31.420464 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:27:31 crc kubenswrapper[4856]: I1122 07:27:31.467576 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:27:31 crc kubenswrapper[4856]: I1122 07:27:31.619793 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:27:31 crc kubenswrapper[4856]: I1122 07:27:31.932213 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p5wtg"] Nov 22 07:27:31 crc kubenswrapper[4856]: W1122 07:27:31.939701 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42524de9_1639_4cd4_b1b3_c651b6aa2dbf.slice/crio-789a6713fa13228c11bcbab539df0b72f2b6c429f4df9a1bcf40e9c05dfb3b35 WatchSource:0}: Error finding container 789a6713fa13228c11bcbab539df0b72f2b6c429f4df9a1bcf40e9c05dfb3b35: Status 404 returned error can't find the container with id 789a6713fa13228c11bcbab539df0b72f2b6c429f4df9a1bcf40e9c05dfb3b35 Nov 22 07:27:32 crc kubenswrapper[4856]: I1122 07:27:32.562270 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerStarted","Data":"3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab"} Nov 22 07:27:32 crc kubenswrapper[4856]: I1122 07:27:32.566342 4856 generic.go:334] "Generic (PLEG): container finished" podID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerID="a44af4c6e138ab2f394051b4858bde810a57c4633ddd1d6910ff527f0a8bad0f" exitCode=0 Nov 22 07:27:32 crc kubenswrapper[4856]: I1122 07:27:32.569733 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5wtg" 
event={"ID":"42524de9-1639-4cd4-b1b3-c651b6aa2dbf","Type":"ContainerDied","Data":"a44af4c6e138ab2f394051b4858bde810a57c4633ddd1d6910ff527f0a8bad0f"} Nov 22 07:27:32 crc kubenswrapper[4856]: I1122 07:27:32.569867 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5wtg" event={"ID":"42524de9-1639-4cd4-b1b3-c651b6aa2dbf","Type":"ContainerStarted","Data":"789a6713fa13228c11bcbab539df0b72f2b6c429f4df9a1bcf40e9c05dfb3b35"} Nov 22 07:27:33 crc kubenswrapper[4856]: I1122 07:27:33.553641 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8fst"] Nov 22 07:27:33 crc kubenswrapper[4856]: I1122 07:27:33.573529 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g8fst" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="registry-server" containerID="cri-o://7c006f8ba7f553b68689ed342d834c0b736d6cafe830df1ba174db534f4f85f2" gracePeriod=2 Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.586841 4856 generic.go:334] "Generic (PLEG): container finished" podID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerID="7c006f8ba7f553b68689ed342d834c0b736d6cafe830df1ba174db534f4f85f2" exitCode=0 Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.586946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fst" event={"ID":"f2868d95-c9d5-4cad-8bba-d80388d761ef","Type":"ContainerDied","Data":"7c006f8ba7f553b68689ed342d834c0b736d6cafe830df1ba174db534f4f85f2"} Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.707407 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.801846 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-catalog-content\") pod \"f2868d95-c9d5-4cad-8bba-d80388d761ef\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.801911 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pshlp\" (UniqueName: \"kubernetes.io/projected/f2868d95-c9d5-4cad-8bba-d80388d761ef-kube-api-access-pshlp\") pod \"f2868d95-c9d5-4cad-8bba-d80388d761ef\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.801970 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-utilities\") pod \"f2868d95-c9d5-4cad-8bba-d80388d761ef\" (UID: \"f2868d95-c9d5-4cad-8bba-d80388d761ef\") " Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.802799 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-utilities" (OuterVolumeSpecName: "utilities") pod "f2868d95-c9d5-4cad-8bba-d80388d761ef" (UID: "f2868d95-c9d5-4cad-8bba-d80388d761ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.805047 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.806132 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2868d95-c9d5-4cad-8bba-d80388d761ef-kube-api-access-pshlp" (OuterVolumeSpecName: "kube-api-access-pshlp") pod "f2868d95-c9d5-4cad-8bba-d80388d761ef" (UID: "f2868d95-c9d5-4cad-8bba-d80388d761ef"). InnerVolumeSpecName "kube-api-access-pshlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.892488 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2868d95-c9d5-4cad-8bba-d80388d761ef" (UID: "f2868d95-c9d5-4cad-8bba-d80388d761ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.906985 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2868d95-c9d5-4cad-8bba-d80388d761ef-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:34 crc kubenswrapper[4856]: I1122 07:27:34.907372 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pshlp\" (UniqueName: \"kubernetes.io/projected/f2868d95-c9d5-4cad-8bba-d80388d761ef-kube-api-access-pshlp\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.598069 4856 generic.go:334] "Generic (PLEG): container finished" podID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerID="93901b3bd17226ddf3c41d1dc2d8020befb3521d6805cda3fa32db07829ac66b" exitCode=0 Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.598187 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5wtg" event={"ID":"42524de9-1639-4cd4-b1b3-c651b6aa2dbf","Type":"ContainerDied","Data":"93901b3bd17226ddf3c41d1dc2d8020befb3521d6805cda3fa32db07829ac66b"} Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.600849 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerStarted","Data":"67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509"} Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.600982 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.601003 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-central-agent" containerID="cri-o://cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829" gracePeriod=30 Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.601032 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-notification-agent" containerID="cri-o://7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b" gracePeriod=30 Nov 22 07:27:35 crc 
kubenswrapper[4856]: I1122 07:27:35.601053 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="proxy-httpd" containerID="cri-o://67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509" gracePeriod=30 Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.601056 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="sg-core" containerID="cri-o://3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab" gracePeriod=30 Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.604316 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g8fst" event={"ID":"f2868d95-c9d5-4cad-8bba-d80388d761ef","Type":"ContainerDied","Data":"e4c85c9240f6224cacf0317912369a9742f2eba7427356b5d97ae49928c24311"} Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.604364 4856 scope.go:117] "RemoveContainer" containerID="7c006f8ba7f553b68689ed342d834c0b736d6cafe830df1ba174db534f4f85f2" Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.604382 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g8fst" Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.642050 4856 scope.go:117] "RemoveContainer" containerID="7ab8d5864dac98a7611b1c1806965b13bd099da2476b71c4496ef076d655cb40" Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.644402 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.297111685 podStartE2EDuration="43.644384304s" podCreationTimestamp="2025-11-22 07:26:52 +0000 UTC" firstStartedPulling="2025-11-22 07:26:53.078249798 +0000 UTC m=+1455.491643066" lastFinishedPulling="2025-11-22 07:27:34.425522417 +0000 UTC m=+1496.838915685" observedRunningTime="2025-11-22 07:27:35.640784695 +0000 UTC m=+1498.054177963" watchObservedRunningTime="2025-11-22 07:27:35.644384304 +0000 UTC m=+1498.057777562" Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.665787 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g8fst"] Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.667896 4856 scope.go:117] "RemoveContainer" containerID="cf782038af024e8f038fb0bfa30bc2e659e9441deb5c7119d3424bedee5bdde0" Nov 22 07:27:35 crc kubenswrapper[4856]: I1122 07:27:35.672419 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g8fst"] Nov 22 07:27:36 crc kubenswrapper[4856]: I1122 07:27:36.614112 4856 generic.go:334] "Generic (PLEG): container finished" podID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerID="67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509" exitCode=0 Nov 22 07:27:36 crc kubenswrapper[4856]: I1122 07:27:36.614407 4856 generic.go:334] "Generic (PLEG): container finished" podID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerID="3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab" exitCode=2 Nov 22 07:27:36 crc kubenswrapper[4856]: I1122 07:27:36.614416 4856 generic.go:334] "Generic (PLEG): container finished" podID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerID="cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829" exitCode=0 Nov 22 07:27:36 crc kubenswrapper[4856]: I1122 07:27:36.614194 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerDied","Data":"67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509"} Nov 22 07:27:36 crc kubenswrapper[4856]: I1122 07:27:36.614479 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerDied","Data":"3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab"} Nov 22 07:27:36 crc kubenswrapper[4856]: I1122 07:27:36.614492 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerDied","Data":"cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829"} Nov 22 07:27:36 crc kubenswrapper[4856]: I1122 07:27:36.721476 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" path="/var/lib/kubelet/pods/f2868d95-c9d5-4cad-8bba-d80388d761ef/volumes" Nov 22 07:27:37 crc kubenswrapper[4856]: I1122 07:27:37.628944 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5wtg" event={"ID":"42524de9-1639-4cd4-b1b3-c651b6aa2dbf","Type":"ContainerStarted","Data":"bbc75e275fd3ae459d6a4c020b7465c1fac5706343ee9fc81268d5c20ca9b2ca"} Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.105301 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.130809 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p5wtg" podStartSLOduration=5.243179355 podStartE2EDuration="9.13079116s" podCreationTimestamp="2025-11-22 07:27:29 +0000 UTC" firstStartedPulling="2025-11-22 07:27:32.57048621 +0000 UTC m=+1494.983879478" lastFinishedPulling="2025-11-22 07:27:36.458098025 +0000 UTC m=+1498.871491283" observedRunningTime="2025-11-22 07:27:37.648140817 +0000 UTC m=+1500.061534085" watchObservedRunningTime="2025-11-22 07:27:38.13079116 +0000 UTC m=+1500.544184418" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.284501 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-run-httpd\") pod \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.284607 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-log-httpd\") pod \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.284719 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-config-data\") pod \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.284750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-sg-core-conf-yaml\") pod \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " Nov 22 
07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.284966 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-combined-ca-bundle\") pod \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.285010 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8q6p\" (UniqueName: \"kubernetes.io/projected/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-kube-api-access-c8q6p\") pod \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.285325 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" (UID: "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.285479 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" (UID: "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.286093 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-scripts\") pod \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\" (UID: \"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9\") " Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.286635 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.286655 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.291789 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-kube-api-access-c8q6p" (OuterVolumeSpecName: "kube-api-access-c8q6p") pod "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" (UID: "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9"). InnerVolumeSpecName "kube-api-access-c8q6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.295200 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-scripts" (OuterVolumeSpecName: "scripts") pod "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" (UID: "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.319539 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" (UID: "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.363430 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" (UID: "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.388305 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.388342 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.388356 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8q6p\" (UniqueName: \"kubernetes.io/projected/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-kube-api-access-c8q6p\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.388372 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.389049 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-config-data" (OuterVolumeSpecName: "config-data") pod "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" (UID: "b467ee7e-f79f-4cbb-9cfb-13d4758e11b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.490100 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.641423 4856 generic.go:334] "Generic (PLEG): container finished" podID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerID="7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b" exitCode=0 Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.641453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerDied","Data":"7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b"} Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.641500 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.641540 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9","Type":"ContainerDied","Data":"775cbc629469fe633c605eb1c2c584805a475c4475b04f0483da8174927c3cb9"} Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.641564 4856 scope.go:117] "RemoveContainer" containerID="67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.666732 4856 scope.go:117] "RemoveContainer" containerID="3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.678706 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.688754 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.700615 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.701224 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-central-agent" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.701289 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-central-agent" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.701351 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-notification-agent" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.701400 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-notification-agent" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.701478 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="registry-server" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.701552 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="registry-server" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.701627 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="proxy-httpd" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.701684 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="proxy-httpd" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.701741 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="sg-core" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.701803 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="sg-core" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.701870 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="extract-content" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.701921 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="extract-content" 
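
The cpu_manager and memory_manager "RemoveStaleState" / "Deleted CPUSet assignment" pairs around this point record the kubelet discarding per-container resource bookkeeping for the just-deleted ceilometer-0 and redhat-operators-g8fst pods before the replacement ceilometer-0 pod is admitted. As a minimal sketch of that prune-stale-assignments pattern — not the kubelet's actual cpumanager code; all types and values below are invented for the example, with pod UIDs copied from the log purely for illustration:

// prune_sketch.go — illustrative only.
package main

import "fmt"

type key struct{ podUID, container string }

// pruneStale drops every assignment whose key is not in the active set,
// mirroring the "removing container" / "Deleted CPUSet assignment" pairs
// in the surrounding entries.
func pruneStale(assignments map[key][]int, active map[key]bool) {
	for k := range assignments {
		if !active[k] {
			fmt.Printf("removing stale state podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key][]int{
		{"b467ee7e-f79f-4cbb-9cfb-13d4758e11b9", "sg-core"}:         {2, 3},
		{"f2868d95-c9d5-4cad-8bba-d80388d761ef", "registry-server"}: {4},
	}
	active := map[key]bool{} // both pods were deleted, so nothing remains active
	pruneStale(assignments, active)
}
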
Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.701976 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="extract-utilities" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.702024 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="extract-utilities" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.702251 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="sg-core" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.702317 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-notification-agent" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.702382 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="ceilometer-central-agent" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.702450 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2868d95-c9d5-4cad-8bba-d80388d761ef" containerName="registry-server" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.702524 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" containerName="proxy-httpd" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.704201 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.706330 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.711633 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.716595 4856 scope.go:117] "RemoveContainer" containerID="7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.725348 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b467ee7e-f79f-4cbb-9cfb-13d4758e11b9" path="/var/lib/kubelet/pods/b467ee7e-f79f-4cbb-9cfb-13d4758e11b9/volumes" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.759981 4856 scope.go:117] "RemoveContainer" containerID="cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.761878 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.796529 4856 scope.go:117] "RemoveContainer" containerID="67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.797050 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509\": container with ID starting with 67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509 not found: ID does not exist" containerID="67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.797099 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509"} err="failed to get container status \"67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509\": rpc error: code = NotFound desc = could not find container \"67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509\": container with ID starting with 67a92d2576d63ad9980885500d3150693ce081cad1d0b15f322dd529e1d24509 not found: ID does not exist" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.797138 4856 scope.go:117] "RemoveContainer" containerID="3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.797996 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab\": container with ID starting with 3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab not found: ID does not exist" containerID="3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.798041 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab"} err="failed to get container status \"3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab\": rpc error: code = NotFound desc = could not find container \"3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab\": container with ID starting with 3532a9f30f9ea0e60903ee4ad791badad180f37306425c272f5f65b99ee2c2ab not found: ID does not exist" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.798070 4856 scope.go:117] "RemoveContainer" containerID="7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.798362 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b\": container with ID starting with 7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b not found: ID does not exist" containerID="7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.798393 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b"} err="failed to get container status \"7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b\": rpc error: code = NotFound desc = could not find container \"7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b\": container with ID starting with 7373594b757818fe85d5a82ee77be6ab5ccbfd19472715139b42b27c4f557e0b not found: ID does not exist" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.798411 4856 scope.go:117] "RemoveContainer" containerID="cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829" Nov 22 07:27:38 crc kubenswrapper[4856]: E1122 07:27:38.798674 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829\": container with ID starting with cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829 not found: ID does not exist" 
containerID="cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.798697 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829"} err="failed to get container status \"cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829\": rpc error: code = NotFound desc = could not find container \"cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829\": container with ID starting with cca212e040858b5629e97d34daf4b9bad0fc1803d15dae320df917f2efa46829 not found: ID does not exist" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.897664 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-run-httpd\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.898352 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.898463 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.898778 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dszb6\" (UniqueName: \"kubernetes.io/projected/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-kube-api-access-dszb6\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.898926 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-scripts\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.898956 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-config-data\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:38 crc kubenswrapper[4856]: I1122 07:27:38.899036 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-log-httpd\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.000614 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-run-httpd\") pod \"ceilometer-0\" (UID: 
\"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.000676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.000706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.000753 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dszb6\" (UniqueName: \"kubernetes.io/projected/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-kube-api-access-dszb6\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.000800 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-scripts\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.000818 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-config-data\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.000844 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-log-httpd\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.001258 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-run-httpd\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.001317 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-log-httpd\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.005588 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.006174 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") 
" pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.006710 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-config-data\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.007765 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-scripts\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.017938 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dszb6\" (UniqueName: \"kubernetes.io/projected/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-kube-api-access-dszb6\") pod \"ceilometer-0\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.026006 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.500977 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:39 crc kubenswrapper[4856]: W1122 07:27:39.512714 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3865f571_b5f0_4da2_b76d_dc2b5ef91b09.slice/crio-392c736fe0057a4ecb8f601181db3e471da19ffd295840d87ba0d60528935b72 WatchSource:0}: Error finding container 392c736fe0057a4ecb8f601181db3e471da19ffd295840d87ba0d60528935b72: Status 404 returned error can't find the container with id 392c736fe0057a4ecb8f601181db3e471da19ffd295840d87ba0d60528935b72 Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.653267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerStarted","Data":"392c736fe0057a4ecb8f601181db3e471da19ffd295840d87ba0d60528935b72"} Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.655629 4856 generic.go:334] "Generic (PLEG): container finished" podID="ffb19735-07df-4fbd-9f9a-4d3aa861e03a" containerID="fe385768b79ac31126e52f3869af0ea80aced065b1de1ca8f0de99c92dbf7f22" exitCode=0 Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.655701 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ckxn9" event={"ID":"ffb19735-07df-4fbd-9f9a-4d3aa861e03a","Type":"ContainerDied","Data":"fe385768b79ac31126e52f3869af0ea80aced065b1de1ca8f0de99c92dbf7f22"} Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.716133 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.716175 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:39 crc kubenswrapper[4856]: I1122 07:27:39.767269 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:40 crc kubenswrapper[4856]: I1122 07:27:40.667684 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerStarted","Data":"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3"} Nov 22 07:27:40 crc kubenswrapper[4856]: I1122 07:27:40.918373 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vr4x8"] Nov 22 07:27:40 crc kubenswrapper[4856]: I1122 07:27:40.926994 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:40 crc kubenswrapper[4856]: I1122 07:27:40.941872 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vr4x8"] Nov 22 07:27:40 crc kubenswrapper[4856]: I1122 07:27:40.957383 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5897169-cfb8-4105-bc18-4fc7cbe28eee-operator-scripts\") pod \"nova-api-db-create-vr4x8\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:40 crc kubenswrapper[4856]: I1122 07:27:40.957528 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8zjg\" (UniqueName: \"kubernetes.io/projected/f5897169-cfb8-4105-bc18-4fc7cbe28eee-kube-api-access-w8zjg\") pod \"nova-api-db-create-vr4x8\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.027593 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jrng7"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.029266 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.035205 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jrng7"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.059631 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq824\" (UniqueName: \"kubernetes.io/projected/44d612c2-f369-4085-8e65-fc4d80281c5a-kube-api-access-bq824\") pod \"nova-cell0-db-create-jrng7\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.059697 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8zjg\" (UniqueName: \"kubernetes.io/projected/f5897169-cfb8-4105-bc18-4fc7cbe28eee-kube-api-access-w8zjg\") pod \"nova-api-db-create-vr4x8\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.059803 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44d612c2-f369-4085-8e65-fc4d80281c5a-operator-scripts\") pod \"nova-cell0-db-create-jrng7\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.059911 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5897169-cfb8-4105-bc18-4fc7cbe28eee-operator-scripts\") pod \"nova-api-db-create-vr4x8\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " 
pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.060810 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5897169-cfb8-4105-bc18-4fc7cbe28eee-operator-scripts\") pod \"nova-api-db-create-vr4x8\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.106423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8zjg\" (UniqueName: \"kubernetes.io/projected/f5897169-cfb8-4105-bc18-4fc7cbe28eee-kube-api-access-w8zjg\") pod \"nova-api-db-create-vr4x8\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.119568 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2c75-account-create-vt7h7"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.120818 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.122955 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.137038 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pb4xh"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.138267 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.148961 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2c75-account-create-vt7h7"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.162093 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pb4xh"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.162701 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xc4x\" (UniqueName: \"kubernetes.io/projected/be896f81-5804-4e66-8006-51eaa9675cb2-kube-api-access-7xc4x\") pod \"nova-cell1-db-create-pb4xh\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.162829 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44d612c2-f369-4085-8e65-fc4d80281c5a-operator-scripts\") pod \"nova-cell0-db-create-jrng7\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.162903 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be896f81-5804-4e66-8006-51eaa9675cb2-operator-scripts\") pod \"nova-cell1-db-create-pb4xh\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.162943 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnv25\" (UniqueName: \"kubernetes.io/projected/2ccea049-279e-43e8-9da2-04101b095f12-kube-api-access-rnv25\") pod \"nova-api-2c75-account-create-vt7h7\" (UID: 
\"2ccea049-279e-43e8-9da2-04101b095f12\") " pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.162971 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccea049-279e-43e8-9da2-04101b095f12-operator-scripts\") pod \"nova-api-2c75-account-create-vt7h7\" (UID: \"2ccea049-279e-43e8-9da2-04101b095f12\") " pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.163033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq824\" (UniqueName: \"kubernetes.io/projected/44d612c2-f369-4085-8e65-fc4d80281c5a-kube-api-access-bq824\") pod \"nova-cell0-db-create-jrng7\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.163542 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44d612c2-f369-4085-8e65-fc4d80281c5a-operator-scripts\") pod \"nova-cell0-db-create-jrng7\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.191943 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ckxn9" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.223671 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq824\" (UniqueName: \"kubernetes.io/projected/44d612c2-f369-4085-8e65-fc4d80281c5a-kube-api-access-bq824\") pod \"nova-cell0-db-create-jrng7\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266207 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-logs\") pod \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266332 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-config-data\") pod \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266383 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj7mg\" (UniqueName: \"kubernetes.io/projected/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-kube-api-access-jj7mg\") pod \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266426 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-scripts\") pod \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266455 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-combined-ca-bundle\") pod 
\"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\" (UID: \"ffb19735-07df-4fbd-9f9a-4d3aa861e03a\") " Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266691 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be896f81-5804-4e66-8006-51eaa9675cb2-operator-scripts\") pod \"nova-cell1-db-create-pb4xh\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266726 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnv25\" (UniqueName: \"kubernetes.io/projected/2ccea049-279e-43e8-9da2-04101b095f12-kube-api-access-rnv25\") pod \"nova-api-2c75-account-create-vt7h7\" (UID: \"2ccea049-279e-43e8-9da2-04101b095f12\") " pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266746 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccea049-279e-43e8-9da2-04101b095f12-operator-scripts\") pod \"nova-api-2c75-account-create-vt7h7\" (UID: \"2ccea049-279e-43e8-9da2-04101b095f12\") " pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.266825 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xc4x\" (UniqueName: \"kubernetes.io/projected/be896f81-5804-4e66-8006-51eaa9675cb2-kube-api-access-7xc4x\") pod \"nova-cell1-db-create-pb4xh\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.267261 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-logs" (OuterVolumeSpecName: "logs") pod "ffb19735-07df-4fbd-9f9a-4d3aa861e03a" (UID: "ffb19735-07df-4fbd-9f9a-4d3aa861e03a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.267755 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be896f81-5804-4e66-8006-51eaa9675cb2-operator-scripts\") pod \"nova-cell1-db-create-pb4xh\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.268325 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccea049-279e-43e8-9da2-04101b095f12-operator-scripts\") pod \"nova-api-2c75-account-create-vt7h7\" (UID: \"2ccea049-279e-43e8-9da2-04101b095f12\") " pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.288222 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-kube-api-access-jj7mg" (OuterVolumeSpecName: "kube-api-access-jj7mg") pod "ffb19735-07df-4fbd-9f9a-4d3aa861e03a" (UID: "ffb19735-07df-4fbd-9f9a-4d3aa861e03a"). InnerVolumeSpecName "kube-api-access-jj7mg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.288694 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-scripts" (OuterVolumeSpecName: "scripts") pod "ffb19735-07df-4fbd-9f9a-4d3aa861e03a" (UID: "ffb19735-07df-4fbd-9f9a-4d3aa861e03a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.293979 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnv25\" (UniqueName: \"kubernetes.io/projected/2ccea049-279e-43e8-9da2-04101b095f12-kube-api-access-rnv25\") pod \"nova-api-2c75-account-create-vt7h7\" (UID: \"2ccea049-279e-43e8-9da2-04101b095f12\") " pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.305273 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xc4x\" (UniqueName: \"kubernetes.io/projected/be896f81-5804-4e66-8006-51eaa9675cb2-kube-api-access-7xc4x\") pod \"nova-cell1-db-create-pb4xh\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.316495 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.319699 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffb19735-07df-4fbd-9f9a-4d3aa861e03a" (UID: "ffb19735-07df-4fbd-9f9a-4d3aa861e03a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.328267 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-config-data" (OuterVolumeSpecName: "config-data") pod "ffb19735-07df-4fbd-9f9a-4d3aa861e03a" (UID: "ffb19735-07df-4fbd-9f9a-4d3aa861e03a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.347833 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7477-account-create-b2vn8"] Nov 22 07:27:41 crc kubenswrapper[4856]: E1122 07:27:41.358025 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb19735-07df-4fbd-9f9a-4d3aa861e03a" containerName="placement-db-sync" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.358077 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb19735-07df-4fbd-9f9a-4d3aa861e03a" containerName="placement-db-sync" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.358337 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb19735-07df-4fbd-9f9a-4d3aa861e03a" containerName="placement-db-sync" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.360162 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.363655 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.369231 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.369261 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.369275 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj7mg\" (UniqueName: \"kubernetes.io/projected/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-kube-api-access-jj7mg\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.369284 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.369295 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffb19735-07df-4fbd-9f9a-4d3aa861e03a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.380593 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7477-account-create-b2vn8"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.424650 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e905-account-create-2fzk4"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.425753 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.437182 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.445487 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e905-account-create-2fzk4"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.472756 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qskx\" (UniqueName: \"kubernetes.io/projected/eb9414db-136f-408b-9081-d9ffdaa00e07-kube-api-access-2qskx\") pod \"nova-cell1-e905-account-create-2fzk4\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.472812 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btrg4\" (UniqueName: \"kubernetes.io/projected/b4500308-9c55-4560-afc5-8e34d65bcfa7-kube-api-access-btrg4\") pod \"nova-cell0-7477-account-create-b2vn8\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.472836 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4500308-9c55-4560-afc5-8e34d65bcfa7-operator-scripts\") pod \"nova-cell0-7477-account-create-b2vn8\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.472878 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9414db-136f-408b-9081-d9ffdaa00e07-operator-scripts\") pod \"nova-cell1-e905-account-create-2fzk4\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.492905 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.521842 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.572961 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.574464 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qskx\" (UniqueName: \"kubernetes.io/projected/eb9414db-136f-408b-9081-d9ffdaa00e07-kube-api-access-2qskx\") pod \"nova-cell1-e905-account-create-2fzk4\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.574799 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btrg4\" (UniqueName: \"kubernetes.io/projected/b4500308-9c55-4560-afc5-8e34d65bcfa7-kube-api-access-btrg4\") pod \"nova-cell0-7477-account-create-b2vn8\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.574824 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4500308-9c55-4560-afc5-8e34d65bcfa7-operator-scripts\") pod \"nova-cell0-7477-account-create-b2vn8\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.574866 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9414db-136f-408b-9081-d9ffdaa00e07-operator-scripts\") pod \"nova-cell1-e905-account-create-2fzk4\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.575717 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4500308-9c55-4560-afc5-8e34d65bcfa7-operator-scripts\") pod \"nova-cell0-7477-account-create-b2vn8\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.575726 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9414db-136f-408b-9081-d9ffdaa00e07-operator-scripts\") pod \"nova-cell1-e905-account-create-2fzk4\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.599916 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btrg4\" (UniqueName: \"kubernetes.io/projected/b4500308-9c55-4560-afc5-8e34d65bcfa7-kube-api-access-btrg4\") pod \"nova-cell0-7477-account-create-b2vn8\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.606227 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qskx\" (UniqueName: \"kubernetes.io/projected/eb9414db-136f-408b-9081-d9ffdaa00e07-kube-api-access-2qskx\") pod \"nova-cell1-e905-account-create-2fzk4\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.693314 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerStarted","Data":"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a"} Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.696187 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-ckxn9" event={"ID":"ffb19735-07df-4fbd-9f9a-4d3aa861e03a","Type":"ContainerDied","Data":"e26620867e598a5bfa415df96470fe005c1640b27fa71ebce34fe3224d92df45"} Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.696242 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e26620867e598a5bfa415df96470fe005c1640b27fa71ebce34fe3224d92df45" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.696290 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-ckxn9" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.696578 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.775317 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.808301 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-fc96b95bb-4mtxg"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.810084 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.817068 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.817286 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.817556 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.817727 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.817854 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2mnmh" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.821994 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fc96b95bb-4mtxg"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.881963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-combined-ca-bundle\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.882042 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-logs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.882083 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-public-tls-certs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.882164 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-scripts\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.882266 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-config-data\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.882308 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6n5k\" (UniqueName: \"kubernetes.io/projected/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-kube-api-access-g6n5k\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.882360 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-internal-tls-certs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.952599 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vr4x8"] Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.988290 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-combined-ca-bundle\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.988354 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-logs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.988386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-public-tls-certs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.988541 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-scripts\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 
07:27:41.988628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-config-data\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.988657 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6n5k\" (UniqueName: \"kubernetes.io/projected/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-kube-api-access-g6n5k\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.988692 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-internal-tls-certs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.988822 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-logs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.995008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-internal-tls-certs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.997826 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-public-tls-certs\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:41 crc kubenswrapper[4856]: I1122 07:27:41.998867 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-config-data\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.001989 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-scripts\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.007174 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-combined-ca-bundle\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.015248 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6n5k\" (UniqueName: 
\"kubernetes.io/projected/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-kube-api-access-g6n5k\") pod \"placement-fc96b95bb-4mtxg\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.159324 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.234853 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2c75-account-create-vt7h7"] Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.246647 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jrng7"] Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.497703 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pb4xh"] Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.577470 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7477-account-create-b2vn8"] Nov 22 07:27:42 crc kubenswrapper[4856]: W1122 07:27:42.587796 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4500308_9c55_4560_afc5_8e34d65bcfa7.slice/crio-b9dacdc30fec37095d5c93c3a5412a96b81ef959ecd6579cafc95451c10f13c9 WatchSource:0}: Error finding container b9dacdc30fec37095d5c93c3a5412a96b81ef959ecd6579cafc95451c10f13c9: Status 404 returned error can't find the container with id b9dacdc30fec37095d5c93c3a5412a96b81ef959ecd6579cafc95451c10f13c9 Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.672924 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e905-account-create-2fzk4"] Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.741790 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2c75-account-create-vt7h7" event={"ID":"2ccea049-279e-43e8-9da2-04101b095f12","Type":"ContainerStarted","Data":"99e1c598e25f59ffff2a323a35facd9127a5471afcf679dd24858433d3553660"} Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.744003 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pb4xh" event={"ID":"be896f81-5804-4e66-8006-51eaa9675cb2","Type":"ContainerStarted","Data":"6c734f6d803b31099b92d2f34cafa9892b9baf400dc384a4d9d8cbd19ccc4abb"} Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.745452 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e905-account-create-2fzk4" event={"ID":"eb9414db-136f-408b-9081-d9ffdaa00e07","Type":"ContainerStarted","Data":"b9f0e690bfb20848c0001ad5a93f6dd716694a792e97dfd8ad6dda4669816a26"} Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.747573 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7477-account-create-b2vn8" event={"ID":"b4500308-9c55-4560-afc5-8e34d65bcfa7","Type":"ContainerStarted","Data":"b9dacdc30fec37095d5c93c3a5412a96b81ef959ecd6579cafc95451c10f13c9"} Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.748635 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jrng7" event={"ID":"44d612c2-f369-4085-8e65-fc4d80281c5a","Type":"ContainerStarted","Data":"710fc4fad90dc73e619930c915e383f9b9af5ea088de10b56b652e49a07c4fb2"} Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.751596 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vr4x8" 
event={"ID":"f5897169-cfb8-4105-bc18-4fc7cbe28eee","Type":"ContainerStarted","Data":"efb8a5d4d8343649cf607898ba5bd73e65ec3c1f989943bfee33058f83c6e13b"} Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.751639 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vr4x8" event={"ID":"f5897169-cfb8-4105-bc18-4fc7cbe28eee","Type":"ContainerStarted","Data":"c96d0e18dc1d50c7a4803455399f5c69887771b442343111be21d4f1510ffb08"} Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.765023 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-fc96b95bb-4mtxg"] Nov 22 07:27:42 crc kubenswrapper[4856]: I1122 07:27:42.779187 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vr4x8" podStartSLOduration=2.7791668449999998 podStartE2EDuration="2.779166845s" podCreationTimestamp="2025-11-22 07:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:27:42.776832992 +0000 UTC m=+1505.190226250" watchObservedRunningTime="2025-11-22 07:27:42.779166845 +0000 UTC m=+1505.192560103" Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.756786 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.757583 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-log" containerID="cri-o://cfdacf6da7c588ca0bcf1479465101a4822c4d7379ea095c311344318b3ab4da" gracePeriod=30 Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.757981 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-httpd" containerID="cri-o://9f4f428a44c5a3482bea4907846448ff80ccd7bbbe766f2ded1a78ab5486a550" gracePeriod=30 Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.790749 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2c75-account-create-vt7h7" event={"ID":"2ccea049-279e-43e8-9da2-04101b095f12","Type":"ContainerStarted","Data":"e51f9c56ea1c0e3182c1c3c0d9428cb803206db87582ea1f3fdb797ee1304a25"} Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.793722 4856 generic.go:334] "Generic (PLEG): container finished" podID="be896f81-5804-4e66-8006-51eaa9675cb2" containerID="4692846d3f96731874a1774fa70ea5b09c98e71c90605b52844703235cc88004" exitCode=0 Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.793776 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pb4xh" event={"ID":"be896f81-5804-4e66-8006-51eaa9675cb2","Type":"ContainerDied","Data":"4692846d3f96731874a1774fa70ea5b09c98e71c90605b52844703235cc88004"} Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.802960 4856 generic.go:334] "Generic (PLEG): container finished" podID="eb9414db-136f-408b-9081-d9ffdaa00e07" containerID="d5d879863319206355a86b5a2ece30cd07fa7ad8e1156bd4465388cd8948de14" exitCode=0 Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.803037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e905-account-create-2fzk4" event={"ID":"eb9414db-136f-408b-9081-d9ffdaa00e07","Type":"ContainerDied","Data":"d5d879863319206355a86b5a2ece30cd07fa7ad8e1156bd4465388cd8948de14"} Nov 22 
07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.809988 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7477-account-create-b2vn8" event={"ID":"b4500308-9c55-4560-afc5-8e34d65bcfa7","Type":"ContainerStarted","Data":"d8ce4b0ff118154c61796fcae8303cb155b81851dc7f2edb9facb56abc699957"} Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.812059 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2c75-account-create-vt7h7" podStartSLOduration=2.81203556 podStartE2EDuration="2.81203556s" podCreationTimestamp="2025-11-22 07:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:27:43.809790259 +0000 UTC m=+1506.223183517" watchObservedRunningTime="2025-11-22 07:27:43.81203556 +0000 UTC m=+1506.225428818" Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.821416 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jrng7" event={"ID":"44d612c2-f369-4085-8e65-fc4d80281c5a","Type":"ContainerStarted","Data":"4b3a34271aa5ac787753f7b938e7ae22608f0db7bdfd20a4d3e671e077fbfc32"} Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.829662 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fc96b95bb-4mtxg" event={"ID":"70c845eb-6695-4de7-8b4a-ef7c6a6701a4","Type":"ContainerStarted","Data":"4271d7224db735b34906645781ea2372db51f2e3d614022512e9b52eee61ba39"} Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.829713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fc96b95bb-4mtxg" event={"ID":"70c845eb-6695-4de7-8b4a-ef7c6a6701a4","Type":"ContainerStarted","Data":"75113acf2dbb16ae1bdf42ce4f05d360fbccf9d7cd342f7d75054ed30301fec5"} Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.859388 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerStarted","Data":"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de"} Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.909541 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-jrng7" podStartSLOduration=3.909519863 podStartE2EDuration="3.909519863s" podCreationTimestamp="2025-11-22 07:27:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:27:43.884015748 +0000 UTC m=+1506.297409026" watchObservedRunningTime="2025-11-22 07:27:43.909519863 +0000 UTC m=+1506.322913121" Nov 22 07:27:43 crc kubenswrapper[4856]: I1122 07:27:43.911476 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-7477-account-create-b2vn8" podStartSLOduration=2.911468066 podStartE2EDuration="2.911468066s" podCreationTimestamp="2025-11-22 07:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:27:43.904839495 +0000 UTC m=+1506.318232743" watchObservedRunningTime="2025-11-22 07:27:43.911468066 +0000 UTC m=+1506.324861324" Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.869445 4856 generic.go:334] "Generic (PLEG): container finished" podID="f5897169-cfb8-4105-bc18-4fc7cbe28eee" containerID="efb8a5d4d8343649cf607898ba5bd73e65ec3c1f989943bfee33058f83c6e13b" exitCode=0 Nov 22 07:27:44 crc 
kubenswrapper[4856]: I1122 07:27:44.869536 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vr4x8" event={"ID":"f5897169-cfb8-4105-bc18-4fc7cbe28eee","Type":"ContainerDied","Data":"efb8a5d4d8343649cf607898ba5bd73e65ec3c1f989943bfee33058f83c6e13b"} Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.873479 4856 generic.go:334] "Generic (PLEG): container finished" podID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerID="cfdacf6da7c588ca0bcf1479465101a4822c4d7379ea095c311344318b3ab4da" exitCode=143 Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.873542 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e4b2fd6-9289-4543-ac15-75da468b55c9","Type":"ContainerDied","Data":"cfdacf6da7c588ca0bcf1479465101a4822c4d7379ea095c311344318b3ab4da"} Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.876030 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fc96b95bb-4mtxg" event={"ID":"70c845eb-6695-4de7-8b4a-ef7c6a6701a4","Type":"ContainerStarted","Data":"bb57a5740eec3fe63e3bb880f72bda941c5f54b634af051477157f490cf788ec"} Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.876174 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.878829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerStarted","Data":"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58"} Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.878937 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.880493 4856 generic.go:334] "Generic (PLEG): container finished" podID="2ccea049-279e-43e8-9da2-04101b095f12" containerID="e51f9c56ea1c0e3182c1c3c0d9428cb803206db87582ea1f3fdb797ee1304a25" exitCode=0 Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.880579 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2c75-account-create-vt7h7" event={"ID":"2ccea049-279e-43e8-9da2-04101b095f12","Type":"ContainerDied","Data":"e51f9c56ea1c0e3182c1c3c0d9428cb803206db87582ea1f3fdb797ee1304a25"} Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.894032 4856 generic.go:334] "Generic (PLEG): container finished" podID="b4500308-9c55-4560-afc5-8e34d65bcfa7" containerID="d8ce4b0ff118154c61796fcae8303cb155b81851dc7f2edb9facb56abc699957" exitCode=0 Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.894099 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7477-account-create-b2vn8" event={"ID":"b4500308-9c55-4560-afc5-8e34d65bcfa7","Type":"ContainerDied","Data":"d8ce4b0ff118154c61796fcae8303cb155b81851dc7f2edb9facb56abc699957"} Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.903801 4856 generic.go:334] "Generic (PLEG): container finished" podID="44d612c2-f369-4085-8e65-fc4d80281c5a" containerID="4b3a34271aa5ac787753f7b938e7ae22608f0db7bdfd20a4d3e671e077fbfc32" exitCode=0 Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.904026 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jrng7" event={"ID":"44d612c2-f369-4085-8e65-fc4d80281c5a","Type":"ContainerDied","Data":"4b3a34271aa5ac787753f7b938e7ae22608f0db7bdfd20a4d3e671e077fbfc32"} Nov 22 07:27:44 crc 
kubenswrapper[4856]: I1122 07:27:44.960494 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5644067699999997 podStartE2EDuration="6.96047627s" podCreationTimestamp="2025-11-22 07:27:38 +0000 UTC" firstStartedPulling="2025-11-22 07:27:39.515626232 +0000 UTC m=+1501.929019490" lastFinishedPulling="2025-11-22 07:27:43.911695732 +0000 UTC m=+1506.325088990" observedRunningTime="2025-11-22 07:27:44.947901248 +0000 UTC m=+1507.361294506" watchObservedRunningTime="2025-11-22 07:27:44.96047627 +0000 UTC m=+1507.373869528" Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.962802 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.963006 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-log" containerID="cri-o://195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df" gracePeriod=30 Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.963171 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-httpd" containerID="cri-o://ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d" gracePeriod=30 Nov 22 07:27:44 crc kubenswrapper[4856]: I1122 07:27:44.975213 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-fc96b95bb-4mtxg" podStartSLOduration=3.975197361 podStartE2EDuration="3.975197361s" podCreationTimestamp="2025-11-22 07:27:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:27:44.970888474 +0000 UTC m=+1507.384281732" watchObservedRunningTime="2025-11-22 07:27:44.975197361 +0000 UTC m=+1507.388590619" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.394775 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.410098 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.469168 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be896f81-5804-4e66-8006-51eaa9675cb2-operator-scripts\") pod \"be896f81-5804-4e66-8006-51eaa9675cb2\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.469673 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be896f81-5804-4e66-8006-51eaa9675cb2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be896f81-5804-4e66-8006-51eaa9675cb2" (UID: "be896f81-5804-4e66-8006-51eaa9675cb2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.469758 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qskx\" (UniqueName: \"kubernetes.io/projected/eb9414db-136f-408b-9081-d9ffdaa00e07-kube-api-access-2qskx\") pod \"eb9414db-136f-408b-9081-d9ffdaa00e07\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.470556 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xc4x\" (UniqueName: \"kubernetes.io/projected/be896f81-5804-4e66-8006-51eaa9675cb2-kube-api-access-7xc4x\") pod \"be896f81-5804-4e66-8006-51eaa9675cb2\" (UID: \"be896f81-5804-4e66-8006-51eaa9675cb2\") " Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.470669 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9414db-136f-408b-9081-d9ffdaa00e07-operator-scripts\") pod \"eb9414db-136f-408b-9081-d9ffdaa00e07\" (UID: \"eb9414db-136f-408b-9081-d9ffdaa00e07\") " Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.471006 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be896f81-5804-4e66-8006-51eaa9675cb2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.471390 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9414db-136f-408b-9081-d9ffdaa00e07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb9414db-136f-408b-9081-d9ffdaa00e07" (UID: "eb9414db-136f-408b-9081-d9ffdaa00e07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.476106 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9414db-136f-408b-9081-d9ffdaa00e07-kube-api-access-2qskx" (OuterVolumeSpecName: "kube-api-access-2qskx") pod "eb9414db-136f-408b-9081-d9ffdaa00e07" (UID: "eb9414db-136f-408b-9081-d9ffdaa00e07"). InnerVolumeSpecName "kube-api-access-2qskx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.488101 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be896f81-5804-4e66-8006-51eaa9675cb2-kube-api-access-7xc4x" (OuterVolumeSpecName: "kube-api-access-7xc4x") pod "be896f81-5804-4e66-8006-51eaa9675cb2" (UID: "be896f81-5804-4e66-8006-51eaa9675cb2"). InnerVolumeSpecName "kube-api-access-7xc4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.573257 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xc4x\" (UniqueName: \"kubernetes.io/projected/be896f81-5804-4e66-8006-51eaa9675cb2-kube-api-access-7xc4x\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.573282 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb9414db-136f-408b-9081-d9ffdaa00e07-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.573292 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qskx\" (UniqueName: \"kubernetes.io/projected/eb9414db-136f-408b-9081-d9ffdaa00e07-kube-api-access-2qskx\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.638948 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.915674 4856 generic.go:334] "Generic (PLEG): container finished" podID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerID="195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df" exitCode=143 Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.915719 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bacb8184-1aa1-400c-99c8-1cab84e83cd7","Type":"ContainerDied","Data":"195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df"} Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.918050 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pb4xh" event={"ID":"be896f81-5804-4e66-8006-51eaa9675cb2","Type":"ContainerDied","Data":"6c734f6d803b31099b92d2f34cafa9892b9baf400dc384a4d9d8cbd19ccc4abb"} Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.918091 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c734f6d803b31099b92d2f34cafa9892b9baf400dc384a4d9d8cbd19ccc4abb" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.918058 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pb4xh" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.919977 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e905-account-create-2fzk4" event={"ID":"eb9414db-136f-408b-9081-d9ffdaa00e07","Type":"ContainerDied","Data":"b9f0e690bfb20848c0001ad5a93f6dd716694a792e97dfd8ad6dda4669816a26"} Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.920025 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f0e690bfb20848c0001ad5a93f6dd716694a792e97dfd8ad6dda4669816a26" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.920234 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e905-account-create-2fzk4" Nov 22 07:27:45 crc kubenswrapper[4856]: I1122 07:27:45.921546 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.352834 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.394671 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5897169-cfb8-4105-bc18-4fc7cbe28eee-operator-scripts\") pod \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.394745 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8zjg\" (UniqueName: \"kubernetes.io/projected/f5897169-cfb8-4105-bc18-4fc7cbe28eee-kube-api-access-w8zjg\") pod \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\" (UID: \"f5897169-cfb8-4105-bc18-4fc7cbe28eee\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.396735 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5897169-cfb8-4105-bc18-4fc7cbe28eee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5897169-cfb8-4105-bc18-4fc7cbe28eee" (UID: "f5897169-cfb8-4105-bc18-4fc7cbe28eee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.401231 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5897169-cfb8-4105-bc18-4fc7cbe28eee-kube-api-access-w8zjg" (OuterVolumeSpecName: "kube-api-access-w8zjg") pod "f5897169-cfb8-4105-bc18-4fc7cbe28eee" (UID: "f5897169-cfb8-4105-bc18-4fc7cbe28eee"). InnerVolumeSpecName "kube-api-access-w8zjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.491720 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.496472 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5897169-cfb8-4105-bc18-4fc7cbe28eee-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.496502 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8zjg\" (UniqueName: \"kubernetes.io/projected/f5897169-cfb8-4105-bc18-4fc7cbe28eee-kube-api-access-w8zjg\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.498136 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.510675 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.597867 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq824\" (UniqueName: \"kubernetes.io/projected/44d612c2-f369-4085-8e65-fc4d80281c5a-kube-api-access-bq824\") pod \"44d612c2-f369-4085-8e65-fc4d80281c5a\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.598048 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44d612c2-f369-4085-8e65-fc4d80281c5a-operator-scripts\") pod \"44d612c2-f369-4085-8e65-fc4d80281c5a\" (UID: \"44d612c2-f369-4085-8e65-fc4d80281c5a\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.598723 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d612c2-f369-4085-8e65-fc4d80281c5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44d612c2-f369-4085-8e65-fc4d80281c5a" (UID: "44d612c2-f369-4085-8e65-fc4d80281c5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.606533 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d612c2-f369-4085-8e65-fc4d80281c5a-kube-api-access-bq824" (OuterVolumeSpecName: "kube-api-access-bq824") pod "44d612c2-f369-4085-8e65-fc4d80281c5a" (UID: "44d612c2-f369-4085-8e65-fc4d80281c5a"). InnerVolumeSpecName "kube-api-access-bq824". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.699622 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccea049-279e-43e8-9da2-04101b095f12-operator-scripts\") pod \"2ccea049-279e-43e8-9da2-04101b095f12\" (UID: \"2ccea049-279e-43e8-9da2-04101b095f12\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.699687 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnv25\" (UniqueName: \"kubernetes.io/projected/2ccea049-279e-43e8-9da2-04101b095f12-kube-api-access-rnv25\") pod \"2ccea049-279e-43e8-9da2-04101b095f12\" (UID: \"2ccea049-279e-43e8-9da2-04101b095f12\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.699774 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4500308-9c55-4560-afc5-8e34d65bcfa7-operator-scripts\") pod \"b4500308-9c55-4560-afc5-8e34d65bcfa7\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.699799 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btrg4\" (UniqueName: \"kubernetes.io/projected/b4500308-9c55-4560-afc5-8e34d65bcfa7-kube-api-access-btrg4\") pod \"b4500308-9c55-4560-afc5-8e34d65bcfa7\" (UID: \"b4500308-9c55-4560-afc5-8e34d65bcfa7\") " Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.700311 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq824\" (UniqueName: \"kubernetes.io/projected/44d612c2-f369-4085-8e65-fc4d80281c5a-kube-api-access-bq824\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.700335 4856 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44d612c2-f369-4085-8e65-fc4d80281c5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.700380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4500308-9c55-4560-afc5-8e34d65bcfa7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4500308-9c55-4560-afc5-8e34d65bcfa7" (UID: "b4500308-9c55-4560-afc5-8e34d65bcfa7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.700443 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ccea049-279e-43e8-9da2-04101b095f12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ccea049-279e-43e8-9da2-04101b095f12" (UID: "2ccea049-279e-43e8-9da2-04101b095f12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.703262 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4500308-9c55-4560-afc5-8e34d65bcfa7-kube-api-access-btrg4" (OuterVolumeSpecName: "kube-api-access-btrg4") pod "b4500308-9c55-4560-afc5-8e34d65bcfa7" (UID: "b4500308-9c55-4560-afc5-8e34d65bcfa7"). InnerVolumeSpecName "kube-api-access-btrg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.703459 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ccea049-279e-43e8-9da2-04101b095f12-kube-api-access-rnv25" (OuterVolumeSpecName: "kube-api-access-rnv25") pod "2ccea049-279e-43e8-9da2-04101b095f12" (UID: "2ccea049-279e-43e8-9da2-04101b095f12"). InnerVolumeSpecName "kube-api-access-rnv25". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.802897 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccea049-279e-43e8-9da2-04101b095f12-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.802961 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnv25\" (UniqueName: \"kubernetes.io/projected/2ccea049-279e-43e8-9da2-04101b095f12-kube-api-access-rnv25\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.803214 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4500308-9c55-4560-afc5-8e34d65bcfa7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.803227 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btrg4\" (UniqueName: \"kubernetes.io/projected/b4500308-9c55-4560-afc5-8e34d65bcfa7-kube-api-access-btrg4\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.961541 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7477-account-create-b2vn8" event={"ID":"b4500308-9c55-4560-afc5-8e34d65bcfa7","Type":"ContainerDied","Data":"b9dacdc30fec37095d5c93c3a5412a96b81ef959ecd6579cafc95451c10f13c9"} Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.961579 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7477-account-create-b2vn8" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.961582 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9dacdc30fec37095d5c93c3a5412a96b81ef959ecd6579cafc95451c10f13c9" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.964971 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jrng7" event={"ID":"44d612c2-f369-4085-8e65-fc4d80281c5a","Type":"ContainerDied","Data":"710fc4fad90dc73e619930c915e383f9b9af5ea088de10b56b652e49a07c4fb2"} Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.965427 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710fc4fad90dc73e619930c915e383f9b9af5ea088de10b56b652e49a07c4fb2" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.965012 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jrng7" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.970464 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vr4x8" event={"ID":"f5897169-cfb8-4105-bc18-4fc7cbe28eee","Type":"ContainerDied","Data":"c96d0e18dc1d50c7a4803455399f5c69887771b442343111be21d4f1510ffb08"} Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.970537 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c96d0e18dc1d50c7a4803455399f5c69887771b442343111be21d4f1510ffb08" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.970608 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vr4x8" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.976761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2c75-account-create-vt7h7" event={"ID":"2ccea049-279e-43e8-9da2-04101b095f12","Type":"ContainerDied","Data":"99e1c598e25f59ffff2a323a35facd9127a5471afcf679dd24858433d3553660"} Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.976815 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e1c598e25f59ffff2a323a35facd9127a5471afcf679dd24858433d3553660" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.976880 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2c75-account-create-vt7h7" Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.978586 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="ceilometer-central-agent" containerID="cri-o://cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3" gracePeriod=30 Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.978752 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="proxy-httpd" containerID="cri-o://74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58" gracePeriod=30 Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.979019 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="ceilometer-notification-agent" containerID="cri-o://5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a" gracePeriod=30 Nov 22 07:27:46 crc kubenswrapper[4856]: I1122 07:27:46.979216 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="sg-core" containerID="cri-o://ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de" gracePeriod=30 Nov 22 07:27:47 crc kubenswrapper[4856]: I1122 07:27:47.989944 4856 generic.go:334] "Generic (PLEG): container finished" podID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerID="74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58" exitCode=0 Nov 22 07:27:47 crc kubenswrapper[4856]: I1122 07:27:47.990001 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerDied","Data":"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58"} Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.676728 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.838925 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h9fd\" (UniqueName: \"kubernetes.io/projected/bacb8184-1aa1-400c-99c8-1cab84e83cd7-kube-api-access-9h9fd\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.838997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-internal-tls-certs\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.839040 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-config-data\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.839168 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-scripts\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.839236 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-combined-ca-bundle\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.839312 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.839338 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-logs\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.839383 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-httpd-run\") pod \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\" (UID: \"bacb8184-1aa1-400c-99c8-1cab84e83cd7\") " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.840316 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.845173 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-logs" (OuterVolumeSpecName: "logs") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.846706 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bacb8184-1aa1-400c-99c8-1cab84e83cd7-kube-api-access-9h9fd" (OuterVolumeSpecName: "kube-api-access-9h9fd") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "kube-api-access-9h9fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.847658 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.849433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-scripts" (OuterVolumeSpecName: "scripts") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.878123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.916498 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-config-data" (OuterVolumeSpecName: "config-data") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.936989 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bacb8184-1aa1-400c-99c8-1cab84e83cd7" (UID: "bacb8184-1aa1-400c-99c8-1cab84e83cd7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.943326 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.943388 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.943426 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.943465 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.943478 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bacb8184-1aa1-400c-99c8-1cab84e83cd7-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.943490 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h9fd\" (UniqueName: \"kubernetes.io/projected/bacb8184-1aa1-400c-99c8-1cab84e83cd7-kube-api-access-9h9fd\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.943502 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.947541 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bacb8184-1aa1-400c-99c8-1cab84e83cd7-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.967492 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 22 07:27:48 crc kubenswrapper[4856]: I1122 07:27:48.968639 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.005859 4856 generic.go:334] "Generic (PLEG): container finished" podID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerID="ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de" exitCode=2 Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006215 4856 generic.go:334] "Generic (PLEG): container finished" podID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerID="5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a" exitCode=0 Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006225 4856 generic.go:334] "Generic (PLEG): container finished" podID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerID="cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3" exitCode=0 Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006267 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerDied","Data":"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de"} Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006292 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerDied","Data":"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a"} Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006303 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerDied","Data":"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3"} Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006312 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3865f571-b5f0-4da2-b76d-dc2b5ef91b09","Type":"ContainerDied","Data":"392c736fe0057a4ecb8f601181db3e471da19ffd295840d87ba0d60528935b72"} Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006328 4856 scope.go:117] "RemoveContainer" containerID="74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.006443 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.010019 4856 generic.go:334] "Generic (PLEG): container finished" podID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerID="ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d" exitCode=0 Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.010072 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.010085 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bacb8184-1aa1-400c-99c8-1cab84e83cd7","Type":"ContainerDied","Data":"ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d"} Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.010106 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bacb8184-1aa1-400c-99c8-1cab84e83cd7","Type":"ContainerDied","Data":"8b8d18a78000ba17e839d09c57953a2d0d5cf19fc3b870dfa1b5b2c0080451ca"} Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.016265 4856 generic.go:334] "Generic (PLEG): container finished" podID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerID="9f4f428a44c5a3482bea4907846448ff80ccd7bbbe766f2ded1a78ab5486a550" exitCode=0 Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.016328 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e4b2fd6-9289-4543-ac15-75da468b55c9","Type":"ContainerDied","Data":"9f4f428a44c5a3482bea4907846448ff80ccd7bbbe766f2ded1a78ab5486a550"} Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.043265 4856 scope.go:117] "RemoveContainer" containerID="ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.048082 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dszb6\" (UniqueName: \"kubernetes.io/projected/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-kube-api-access-dszb6\") pod \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.048143 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-log-httpd\") pod \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.048178 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-config-data\") pod \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.048213 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-scripts\") pod \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.048311 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-combined-ca-bundle\") pod \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.048351 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-sg-core-conf-yaml\") pod \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " 
Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.048409 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-run-httpd\") pod \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\" (UID: \"3865f571-b5f0-4da2-b76d-dc2b5ef91b09\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.052059 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.052527 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.052954 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3865f571-b5f0-4da2-b76d-dc2b5ef91b09" (UID: "3865f571-b5f0-4da2-b76d-dc2b5ef91b09"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.057216 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3865f571-b5f0-4da2-b76d-dc2b5ef91b09" (UID: "3865f571-b5f0-4da2-b76d-dc2b5ef91b09"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.062219 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-kube-api-access-dszb6" (OuterVolumeSpecName: "kube-api-access-dszb6") pod "3865f571-b5f0-4da2-b76d-dc2b5ef91b09" (UID: "3865f571-b5f0-4da2-b76d-dc2b5ef91b09"). InnerVolumeSpecName "kube-api-access-dszb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.063461 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-scripts" (OuterVolumeSpecName: "scripts") pod "3865f571-b5f0-4da2-b76d-dc2b5ef91b09" (UID: "3865f571-b5f0-4da2-b76d-dc2b5ef91b09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.064226 4856 scope.go:117] "RemoveContainer" containerID="5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.071258 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.088716 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3865f571-b5f0-4da2-b76d-dc2b5ef91b09" (UID: "3865f571-b5f0-4da2-b76d-dc2b5ef91b09"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102413 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102805 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d612c2-f369-4085-8e65-fc4d80281c5a" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102829 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d612c2-f369-4085-8e65-fc4d80281c5a" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102851 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be896f81-5804-4e66-8006-51eaa9675cb2" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102860 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="be896f81-5804-4e66-8006-51eaa9675cb2" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102875 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-log" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102883 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-log" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102894 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9414db-136f-408b-9081-d9ffdaa00e07" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102900 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9414db-136f-408b-9081-d9ffdaa00e07" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102908 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="sg-core" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102914 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="sg-core" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102922 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-httpd" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102927 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-httpd" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102936 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4500308-9c55-4560-afc5-8e34d65bcfa7" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102942 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4500308-9c55-4560-afc5-8e34d65bcfa7" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102954 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5897169-cfb8-4105-bc18-4fc7cbe28eee" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102960 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5897169-cfb8-4105-bc18-4fc7cbe28eee" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102967 4856 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="ceilometer-central-agent" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102973 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="ceilometer-central-agent" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102982 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="ceilometer-notification-agent" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.102987 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="ceilometer-notification-agent" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.102997 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="proxy-httpd" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103003 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="proxy-httpd" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.103015 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccea049-279e-43e8-9da2-04101b095f12" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103021 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccea049-279e-43e8-9da2-04101b095f12" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103272 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccea049-279e-43e8-9da2-04101b095f12" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103285 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9414db-136f-408b-9081-d9ffdaa00e07" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103301 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5897169-cfb8-4105-bc18-4fc7cbe28eee" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103311 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="sg-core" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103319 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="proxy-httpd" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103332 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d612c2-f369-4085-8e65-fc4d80281c5a" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103343 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-httpd" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103351 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" containerName="glance-log" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103361 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" containerName="ceilometer-central-agent" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103369 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" 
containerName="ceilometer-notification-agent" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103377 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="be896f81-5804-4e66-8006-51eaa9675cb2" containerName="mariadb-database-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.103382 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4500308-9c55-4560-afc5-8e34d65bcfa7" containerName="mariadb-account-create" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.104488 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.110709 4856 scope.go:117] "RemoveContainer" containerID="cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.111144 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.111342 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.118479 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.153605 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dszb6\" (UniqueName: \"kubernetes.io/projected/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-kube-api-access-dszb6\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.153643 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.153656 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.153677 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.153688 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.180703 4856 scope.go:117] "RemoveContainer" containerID="74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.181843 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3865f571-b5f0-4da2-b76d-dc2b5ef91b09" (UID: "3865f571-b5f0-4da2-b76d-dc2b5ef91b09"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.204771 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58\": container with ID starting with 74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58 not found: ID does not exist" containerID="74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.204826 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58"} err="failed to get container status \"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58\": rpc error: code = NotFound desc = could not find container \"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58\": container with ID starting with 74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58 not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.204858 4856 scope.go:117] "RemoveContainer" containerID="ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.207315 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de\": container with ID starting with ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de not found: ID does not exist" containerID="ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.207356 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de"} err="failed to get container status \"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de\": rpc error: code = NotFound desc = could not find container \"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de\": container with ID starting with ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.207384 4856 scope.go:117] "RemoveContainer" containerID="5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.208102 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a\": container with ID starting with 5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a not found: ID does not exist" containerID="5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.208146 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a"} err="failed to get container status \"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a\": rpc error: code = NotFound desc = could not find container \"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a\": container with ID starting with 5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a 
not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.208166 4856 scope.go:117] "RemoveContainer" containerID="cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.208472 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3\": container with ID starting with cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3 not found: ID does not exist" containerID="cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.208495 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3"} err="failed to get container status \"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3\": rpc error: code = NotFound desc = could not find container \"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3\": container with ID starting with cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3 not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.208528 4856 scope.go:117] "RemoveContainer" containerID="74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.208880 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58"} err="failed to get container status \"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58\": rpc error: code = NotFound desc = could not find container \"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58\": container with ID starting with 74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58 not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.208916 4856 scope.go:117] "RemoveContainer" containerID="ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.209200 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de"} err="failed to get container status \"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de\": rpc error: code = NotFound desc = could not find container \"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de\": container with ID starting with ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.209238 4856 scope.go:117] "RemoveContainer" containerID="5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.209540 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a"} err="failed to get container status \"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a\": rpc error: code = NotFound desc = could not find container \"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a\": container with ID starting with 5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a 
not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.209577 4856 scope.go:117] "RemoveContainer" containerID="cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.209881 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3"} err="failed to get container status \"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3\": rpc error: code = NotFound desc = could not find container \"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3\": container with ID starting with cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3 not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.209914 4856 scope.go:117] "RemoveContainer" containerID="74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.210209 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58"} err="failed to get container status \"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58\": rpc error: code = NotFound desc = could not find container \"74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58\": container with ID starting with 74bdba04d25caf57aed9ddaef5e698eb66672c6fda8fb334be833ed27ddd5d58 not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.210242 4856 scope.go:117] "RemoveContainer" containerID="ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.210537 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de"} err="failed to get container status \"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de\": rpc error: code = NotFound desc = could not find container \"ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de\": container with ID starting with ff3daa9e2223f0b0ad610e86b1ac9849f06d7d68caaa6738f468b8f5727590de not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.210570 4856 scope.go:117] "RemoveContainer" containerID="5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.210788 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a"} err="failed to get container status \"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a\": rpc error: code = NotFound desc = could not find container \"5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a\": container with ID starting with 5e2ba1868e81a434fe44e962d51aff14131f97607c06a9da899043425d285c0a not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.210807 4856 scope.go:117] "RemoveContainer" containerID="cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.211341 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3"} err="failed to get 
container status \"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3\": rpc error: code = NotFound desc = could not find container \"cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3\": container with ID starting with cd09bad4b7c99b6219640d1537881e870bfaa8421a7f7d18589732e0381974a3 not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.211381 4856 scope.go:117] "RemoveContainer" containerID="ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.221688 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-config-data" (OuterVolumeSpecName: "config-data") pod "3865f571-b5f0-4da2-b76d-dc2b5ef91b09" (UID: "3865f571-b5f0-4da2-b76d-dc2b5ef91b09"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.247321 4856 scope.go:117] "RemoveContainer" containerID="195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256382 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256429 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256468 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wb7g\" (UniqueName: \"kubernetes.io/projected/c75cebe3-86db-4be1-9755-4bd8a83c9796-kube-api-access-4wb7g\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256519 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256554 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256653 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-logs\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " 
pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256742 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256809 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256940 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.256953 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f571-b5f0-4da2-b76d-dc2b5ef91b09-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.289764 4856 scope.go:117] "RemoveContainer" containerID="ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.290692 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d\": container with ID starting with ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d not found: ID does not exist" containerID="ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.290734 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d"} err="failed to get container status \"ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d\": rpc error: code = NotFound desc = could not find container \"ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d\": container with ID starting with ad7f34f5bdfc3e25a3856d2526fe4b7c79b62a3a2e4466751fce5b4b8233dc7d not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.290756 4856 scope.go:117] "RemoveContainer" containerID="195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df" Nov 22 07:27:49 crc kubenswrapper[4856]: E1122 07:27:49.293913 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df\": container with ID starting with 195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df not found: ID does not exist" containerID="195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.293944 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df"} err="failed to get container status 
\"195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df\": rpc error: code = NotFound desc = could not find container \"195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df\": container with ID starting with 195c38e7799f12a7b158ae8ba1823c75572346d7ca99a5add897c36d0d7f77df not found: ID does not exist" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.350572 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.355107 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.358701 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.359097 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.359292 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.359411 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.359562 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wb7g\" (UniqueName: \"kubernetes.io/projected/c75cebe3-86db-4be1-9755-4bd8a83c9796-kube-api-access-4wb7g\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.359697 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.359827 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.359931 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-logs\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.360610 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-logs\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.363051 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.367008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.369794 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.370806 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.375454 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.376370 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.400891 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.430349 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.430526 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.438993 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.439059 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.442621 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wb7g\" (UniqueName: \"kubernetes.io/projected/c75cebe3-86db-4be1-9755-4bd8a83c9796-kube-api-access-4wb7g\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.475440 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-0\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.482599 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.552335 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.566525 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nltxb\" (UniqueName: \"kubernetes.io/projected/40a3bda1-818f-456f-a44b-d6af3971c3ce-kube-api-access-nltxb\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.566690 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-config-data\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.566726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-log-httpd\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.566750 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.566771 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.566798 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-scripts\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.566826 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-run-httpd\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.669781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-combined-ca-bundle\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.670169 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-config-data\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.670204 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-logs\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.670242 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs4n5\" (UniqueName: \"kubernetes.io/projected/4e4b2fd6-9289-4543-ac15-75da468b55c9-kube-api-access-fs4n5\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.670271 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-scripts\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.670378 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-httpd-run\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.670419 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-public-tls-certs\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.670446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"4e4b2fd6-9289-4543-ac15-75da468b55c9\" (UID: \"4e4b2fd6-9289-4543-ac15-75da468b55c9\") " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671344 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671627 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-config-data\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671688 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-log-httpd\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671719 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671742 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671781 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-scripts\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671815 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-run-httpd\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.671934 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nltxb\" (UniqueName: \"kubernetes.io/projected/40a3bda1-818f-456f-a44b-d6af3971c3ce-kube-api-access-nltxb\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.672171 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.675222 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-log-httpd\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.676097 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-logs" (OuterVolumeSpecName: "logs") pod 
"4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.676455 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4b2fd6-9289-4543-ac15-75da468b55c9-kube-api-access-fs4n5" (OuterVolumeSpecName: "kube-api-access-fs4n5") pod "4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "kube-api-access-fs4n5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.676671 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-run-httpd\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.678347 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-scripts" (OuterVolumeSpecName: "scripts") pod "4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.679336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-config-data\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.681527 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.686912 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-scripts\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.690024 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.690291 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.712168 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nltxb\" (UniqueName: \"kubernetes.io/projected/40a3bda1-818f-456f-a44b-d6af3971c3ce-kube-api-access-nltxb\") pod \"ceilometer-0\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.720967 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.768914 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.774590 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.774624 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.774636 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e4b2fd6-9289-4543-ac15-75da468b55c9-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.774647 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs4n5\" (UniqueName: \"kubernetes.io/projected/4e4b2fd6-9289-4543-ac15-75da468b55c9-kube-api-access-fs4n5\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.774657 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.787174 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.795160 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.805587 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-config-data" (OuterVolumeSpecName: "config-data") pod "4e4b2fd6-9289-4543-ac15-75da468b55c9" (UID: "4e4b2fd6-9289-4543-ac15-75da468b55c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.806880 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.860723 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5wtg"] Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.885875 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.885916 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:49 crc kubenswrapper[4856]: I1122 07:27:49.885928 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e4b2fd6-9289-4543-ac15-75da468b55c9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.027862 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.027856 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e4b2fd6-9289-4543-ac15-75da468b55c9","Type":"ContainerDied","Data":"5cd38466682d45fcc0fb6fdcef883681918b301b838befbf074021cc76e0d489"} Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.028341 4856 scope.go:117] "RemoveContainer" containerID="9f4f428a44c5a3482bea4907846448ff80ccd7bbbe766f2ded1a78ab5486a550" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.034181 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p5wtg" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="registry-server" containerID="cri-o://bbc75e275fd3ae459d6a4c020b7465c1fac5706343ee9fc81268d5c20ca9b2ca" gracePeriod=2 Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.069225 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.101996 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.116133 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:27:50 crc kubenswrapper[4856]: E1122 07:27:50.116673 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-httpd" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.116696 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-httpd" Nov 22 07:27:50 crc kubenswrapper[4856]: E1122 07:27:50.116718 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-log" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.116726 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-log" 
Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.116948 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-log" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.119484 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" containerName="glance-httpd" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.120827 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.129280 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.129302 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.129709 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.142309 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.145110 4856 scope.go:117] "RemoveContainer" containerID="cfdacf6da7c588ca0bcf1479465101a4822c4d7379ea095c311344318b3ab4da" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191281 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rrhz\" (UniqueName: \"kubernetes.io/projected/bb0a212d-74dc-40d3-84a4-bce83b78e788-kube-api-access-4rrhz\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191361 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191420 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191449 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191542 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191580 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191638 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.191693 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-logs\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.267460 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:50 crc kubenswrapper[4856]: W1122 07:27:50.281697 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40a3bda1_818f_456f_a44b_d6af3971c3ce.slice/crio-9f5eb4e77e6740fac3a37a0515e97dbde5b8faea7697534313a4568430ac17ba WatchSource:0}: Error finding container 9f5eb4e77e6740fac3a37a0515e97dbde5b8faea7697534313a4568430ac17ba: Status 404 returned error can't find the container with id 9f5eb4e77e6740fac3a37a0515e97dbde5b8faea7697534313a4568430ac17ba Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293018 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rrhz\" (UniqueName: \"kubernetes.io/projected/bb0a212d-74dc-40d3-84a4-bce83b78e788-kube-api-access-4rrhz\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293131 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293201 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293242 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293293 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293360 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293416 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-config-data\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293465 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-logs\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293655 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.293975 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.294815 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-logs\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.302205 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.302410 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.304888 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.311485 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-scripts\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.317211 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rrhz\" (UniqueName: \"kubernetes.io/projected/bb0a212d-74dc-40d3-84a4-bce83b78e788-kube-api-access-4rrhz\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.342462 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-0\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.462995 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.738259 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3865f571-b5f0-4da2-b76d-dc2b5ef91b09" path="/var/lib/kubelet/pods/3865f571-b5f0-4da2-b76d-dc2b5ef91b09/volumes" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.739897 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e4b2fd6-9289-4543-ac15-75da468b55c9" path="/var/lib/kubelet/pods/4e4b2fd6-9289-4543-ac15-75da468b55c9/volumes" Nov 22 07:27:50 crc kubenswrapper[4856]: I1122 07:27:50.741693 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bacb8184-1aa1-400c-99c8-1cab84e83cd7" path="/var/lib/kubelet/pods/bacb8184-1aa1-400c-99c8-1cab84e83cd7/volumes" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.048280 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c75cebe3-86db-4be1-9755-4bd8a83c9796","Type":"ContainerStarted","Data":"87c89906bf819de89643974ff91061bf464fcbe0da565621b557fdb026d38601"} Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.048334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c75cebe3-86db-4be1-9755-4bd8a83c9796","Type":"ContainerStarted","Data":"e33450b9f082c55f7b154961241179c872319fb3ef16075de2e014c64cc91197"} Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.055289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerStarted","Data":"9f5eb4e77e6740fac3a37a0515e97dbde5b8faea7697534313a4568430ac17ba"} Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.066553 4856 generic.go:334] "Generic (PLEG): container finished" podID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerID="bbc75e275fd3ae459d6a4c020b7465c1fac5706343ee9fc81268d5c20ca9b2ca" exitCode=0 Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.066646 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5wtg" 
event={"ID":"42524de9-1639-4cd4-b1b3-c651b6aa2dbf","Type":"ContainerDied","Data":"bbc75e275fd3ae459d6a4c020b7465c1fac5706343ee9fc81268d5c20ca9b2ca"} Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.094123 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.342076 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.414463 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-catalog-content\") pod \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.414578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htkph\" (UniqueName: \"kubernetes.io/projected/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-kube-api-access-htkph\") pod \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.414663 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-utilities\") pod \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\" (UID: \"42524de9-1639-4cd4-b1b3-c651b6aa2dbf\") " Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.415567 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-utilities" (OuterVolumeSpecName: "utilities") pod "42524de9-1639-4cd4-b1b3-c651b6aa2dbf" (UID: "42524de9-1639-4cd4-b1b3-c651b6aa2dbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.421805 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-kube-api-access-htkph" (OuterVolumeSpecName: "kube-api-access-htkph") pod "42524de9-1639-4cd4-b1b3-c651b6aa2dbf" (UID: "42524de9-1639-4cd4-b1b3-c651b6aa2dbf"). InnerVolumeSpecName "kube-api-access-htkph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.473964 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42524de9-1639-4cd4-b1b3-c651b6aa2dbf" (UID: "42524de9-1639-4cd4-b1b3-c651b6aa2dbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.519981 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.520051 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htkph\" (UniqueName: \"kubernetes.io/projected/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-kube-api-access-htkph\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.520073 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42524de9-1639-4cd4-b1b3-c651b6aa2dbf-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.841991 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjp7j"] Nov 22 07:27:51 crc kubenswrapper[4856]: E1122 07:27:51.842534 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="extract-utilities" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.842562 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="extract-utilities" Nov 22 07:27:51 crc kubenswrapper[4856]: E1122 07:27:51.842589 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="registry-server" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.842599 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="registry-server" Nov 22 07:27:51 crc kubenswrapper[4856]: E1122 07:27:51.842618 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="extract-content" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.842626 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="extract-content" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.842916 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" containerName="registry-server" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.843932 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjp7j"] Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.844033 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.854151 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.854464 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.854761 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9nqcq" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.925588 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-config-data\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.926095 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.926140 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-scripts\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:51 crc kubenswrapper[4856]: I1122 07:27:51.926220 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44zsm\" (UniqueName: \"kubernetes.io/projected/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-kube-api-access-44zsm\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.027389 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.027453 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-scripts\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.027539 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44zsm\" (UniqueName: \"kubernetes.io/projected/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-kube-api-access-44zsm\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.027575 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-config-data\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.048023 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-scripts\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.050136 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-config-data\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.057336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.076056 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44zsm\" (UniqueName: \"kubernetes.io/projected/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-kube-api-access-44zsm\") pod \"nova-cell0-conductor-db-sync-mjp7j\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.124998 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb0a212d-74dc-40d3-84a4-bce83b78e788","Type":"ContainerStarted","Data":"f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0"} Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.125047 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb0a212d-74dc-40d3-84a4-bce83b78e788","Type":"ContainerStarted","Data":"145e73a89b1cf46146622956547b80e15f5d5360146cadd995a3e353c67367ed"} Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.127756 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p5wtg" event={"ID":"42524de9-1639-4cd4-b1b3-c651b6aa2dbf","Type":"ContainerDied","Data":"789a6713fa13228c11bcbab539df0b72f2b6c429f4df9a1bcf40e9c05dfb3b35"} Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.127790 4856 scope.go:117] "RemoveContainer" containerID="bbc75e275fd3ae459d6a4c020b7465c1fac5706343ee9fc81268d5c20ca9b2ca" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.127923 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-p5wtg" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.172547 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p5wtg"] Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.178535 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p5wtg"] Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.189969 4856 scope.go:117] "RemoveContainer" containerID="93901b3bd17226ddf3c41d1dc2d8020befb3521d6805cda3fa32db07829ac66b" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.216779 4856 scope.go:117] "RemoveContainer" containerID="a44af4c6e138ab2f394051b4858bde810a57c4633ddd1d6910ff527f0a8bad0f" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.221141 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.724326 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42524de9-1639-4cd4-b1b3-c651b6aa2dbf" path="/var/lib/kubelet/pods/42524de9-1639-4cd4-b1b3-c651b6aa2dbf/volumes" Nov 22 07:27:52 crc kubenswrapper[4856]: I1122 07:27:52.728749 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjp7j"] Nov 22 07:27:53 crc kubenswrapper[4856]: I1122 07:27:53.141318 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb0a212d-74dc-40d3-84a4-bce83b78e788","Type":"ContainerStarted","Data":"252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928"} Nov 22 07:27:53 crc kubenswrapper[4856]: I1122 07:27:53.144249 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c75cebe3-86db-4be1-9755-4bd8a83c9796","Type":"ContainerStarted","Data":"cfc3e2910129f9e8a60e68b621e6eee3267b6c9aa86e078920823532cee13fa0"} Nov 22 07:27:53 crc kubenswrapper[4856]: I1122 07:27:53.146318 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" event={"ID":"a4ab0b87-dec0-42f2-86a2-4e12a02c7573","Type":"ContainerStarted","Data":"2c409d742ee56e58f573ea269a857bbbdc54d54c215a5d51289b0b2419e3ea31"} Nov 22 07:27:53 crc kubenswrapper[4856]: I1122 07:27:53.148499 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerStarted","Data":"dafb9cfcc49fa79ffd70a382afa51ec5d061d1e21b2c8205f6194e3556d39a25"} Nov 22 07:27:53 crc kubenswrapper[4856]: I1122 07:27:53.189459 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.189436976 podStartE2EDuration="3.189436976s" podCreationTimestamp="2025-11-22 07:27:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:27:53.169526964 +0000 UTC m=+1515.582920222" watchObservedRunningTime="2025-11-22 07:27:53.189436976 +0000 UTC m=+1515.602830234" Nov 22 07:27:53 crc kubenswrapper[4856]: I1122 07:27:53.207801 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.207776836 podStartE2EDuration="4.207776836s" podCreationTimestamp="2025-11-22 07:27:49 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:27:53.194982407 +0000 UTC m=+1515.608375685" watchObservedRunningTime="2025-11-22 07:27:53.207776836 +0000 UTC m=+1515.621170094" Nov 22 07:27:54 crc kubenswrapper[4856]: I1122 07:27:54.163325 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerStarted","Data":"f0b3ea9b180fa5996ea105f756332563275a060a41ea330b1991aa0ac038295a"} Nov 22 07:27:55 crc kubenswrapper[4856]: I1122 07:27:55.045688 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:27:55 crc kubenswrapper[4856]: I1122 07:27:55.182279 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerStarted","Data":"12cc83a401687c88f8ebc35ffd49d10a0a2decd86d166d325cfd735c090acc3f"} Nov 22 07:27:55 crc kubenswrapper[4856]: I1122 07:27:55.184690 4856 generic.go:334] "Generic (PLEG): container finished" podID="f62cc6af-1032-4593-a11f-0dde4a6020ae" containerID="c3368e9bb887c530083f7a09aacec83accf90141e2a0af6a2fffe8655043dddd" exitCode=0 Nov 22 07:27:55 crc kubenswrapper[4856]: I1122 07:27:55.184735 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-298l7" event={"ID":"f62cc6af-1032-4593-a11f-0dde4a6020ae","Type":"ContainerDied","Data":"c3368e9bb887c530083f7a09aacec83accf90141e2a0af6a2fffe8655043dddd"} Nov 22 07:27:59 crc kubenswrapper[4856]: I1122 07:27:59.483231 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:59 crc kubenswrapper[4856]: I1122 07:27:59.483745 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:59 crc kubenswrapper[4856]: I1122 07:27:59.512404 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:27:59 crc kubenswrapper[4856]: I1122 07:27:59.546188 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 07:28:00 crc kubenswrapper[4856]: I1122 07:28:00.230267 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:28:00 crc kubenswrapper[4856]: I1122 07:28:00.230468 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 07:28:00 crc kubenswrapper[4856]: I1122 07:28:00.463210 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:28:00 crc kubenswrapper[4856]: I1122 07:28:00.463262 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 07:28:00 crc kubenswrapper[4856]: I1122 07:28:00.535253 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:28:00 crc kubenswrapper[4856]: I1122 07:28:00.552169 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 07:28:01 crc kubenswrapper[4856]: I1122 07:28:01.240964 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 
22 07:28:01 crc kubenswrapper[4856]: I1122 07:28:01.241307 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 07:28:02 crc kubenswrapper[4856]: I1122 07:28:02.389974 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:28:02 crc kubenswrapper[4856]: I1122 07:28:02.390079 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:28:02 crc kubenswrapper[4856]: I1122 07:28:02.929318 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.471543 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.471785 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.740869 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.882794 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-298l7" Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.991535 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-combined-ca-bundle\") pod \"f62cc6af-1032-4593-a11f-0dde4a6020ae\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.991951 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-db-sync-config-data\") pod \"f62cc6af-1032-4593-a11f-0dde4a6020ae\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.992082 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chzvc\" (UniqueName: \"kubernetes.io/projected/f62cc6af-1032-4593-a11f-0dde4a6020ae-kube-api-access-chzvc\") pod \"f62cc6af-1032-4593-a11f-0dde4a6020ae\" (UID: \"f62cc6af-1032-4593-a11f-0dde4a6020ae\") " Nov 22 07:28:03 crc kubenswrapper[4856]: I1122 07:28:03.997628 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f62cc6af-1032-4593-a11f-0dde4a6020ae" (UID: "f62cc6af-1032-4593-a11f-0dde4a6020ae"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.005929 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f62cc6af-1032-4593-a11f-0dde4a6020ae-kube-api-access-chzvc" (OuterVolumeSpecName: "kube-api-access-chzvc") pod "f62cc6af-1032-4593-a11f-0dde4a6020ae" (UID: "f62cc6af-1032-4593-a11f-0dde4a6020ae"). InnerVolumeSpecName "kube-api-access-chzvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.036322 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f62cc6af-1032-4593-a11f-0dde4a6020ae" (UID: "f62cc6af-1032-4593-a11f-0dde4a6020ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.094095 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.094128 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f62cc6af-1032-4593-a11f-0dde4a6020ae-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.094138 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chzvc\" (UniqueName: \"kubernetes.io/projected/f62cc6af-1032-4593-a11f-0dde4a6020ae-kube-api-access-chzvc\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.272058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-298l7" event={"ID":"f62cc6af-1032-4593-a11f-0dde4a6020ae","Type":"ContainerDied","Data":"fbb7a32b505ad0f80f9487229e47feac27c2a855957cabf6899a6c15d5173c4e"} Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.272098 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbb7a32b505ad0f80f9487229e47feac27c2a855957cabf6899a6c15d5173c4e" Nov 22 07:28:04 crc kubenswrapper[4856]: I1122 07:28:04.272552 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-298l7" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.147612 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-68b59dd9f8-dgbs9"] Nov 22 07:28:05 crc kubenswrapper[4856]: E1122 07:28:05.148501 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f62cc6af-1032-4593-a11f-0dde4a6020ae" containerName="barbican-db-sync" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.148523 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f62cc6af-1032-4593-a11f-0dde4a6020ae" containerName="barbican-db-sync" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.148798 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f62cc6af-1032-4593-a11f-0dde4a6020ae" containerName="barbican-db-sync" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.150128 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.157360 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.157594 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-f69556b5c-qmsmf"] Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.159514 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.163990 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-qslwv" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.164380 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.164672 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.192120 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-68b59dd9f8-dgbs9"] Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.212687 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8k9b\" (UniqueName: \"kubernetes.io/projected/665dbe7c-5370-4a97-8502-e9b25c8acd3a-kube-api-access-x8k9b\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.212841 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-combined-ca-bundle\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.212887 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data-custom\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.212974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.213034 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665dbe7c-5370-4a97-8502-e9b25c8acd3a-logs\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.218334 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f69556b5c-qmsmf"] Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.278158 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84c94df5fc-hgwfg"] Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.280193 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.308383 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c94df5fc-hgwfg"] Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lgj\" (UniqueName: \"kubernetes.io/projected/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-kube-api-access-w6lgj\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314495 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8k9b\" (UniqueName: \"kubernetes.io/projected/665dbe7c-5370-4a97-8502-e9b25c8acd3a-kube-api-access-x8k9b\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314650 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314752 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data-custom\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314794 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-combined-ca-bundle\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314834 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data-custom\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314881 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-combined-ca-bundle\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.314987 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " 
pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.315029 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-logs\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.315066 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665dbe7c-5370-4a97-8502-e9b25c8acd3a-logs\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.315555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665dbe7c-5370-4a97-8502-e9b25c8acd3a-logs\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.322350 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data-custom\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.331976 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.344782 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8k9b\" (UniqueName: \"kubernetes.io/projected/665dbe7c-5370-4a97-8502-e9b25c8acd3a-kube-api-access-x8k9b\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.345420 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-combined-ca-bundle\") pod \"barbican-keystone-listener-68b59dd9f8-dgbs9\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.372312 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bd5f4cd4-gqk7t"] Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.377015 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.382086 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.401213 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5f4cd4-gqk7t"] Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417035 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417102 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmln\" (UniqueName: \"kubernetes.io/projected/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-kube-api-access-8nmln\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417169 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data-custom\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417355 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417416 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417497 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-svc\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417559 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-combined-ca-bundle\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: 
\"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417655 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-config\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417714 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-logs\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.417758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lgj\" (UniqueName: \"kubernetes.io/projected/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-kube-api-access-w6lgj\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.420467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-logs\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.423852 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data-custom\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.424858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.426118 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-combined-ca-bundle\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.440367 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lgj\" (UniqueName: \"kubernetes.io/projected/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-kube-api-access-w6lgj\") pod \"barbican-worker-f69556b5c-qmsmf\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.494499 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.506305 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.542352 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.546873 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.543727 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.547892 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9803018-06dd-4572-ae9e-eadc43492e39-logs\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.550148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.550416 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-svc\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.550733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-config\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.550966 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.551561 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-svc\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc 
kubenswrapper[4856]: I1122 07:28:05.552239 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-config\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.555308 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cvv4\" (UniqueName: \"kubernetes.io/projected/a9803018-06dd-4572-ae9e-eadc43492e39-kube-api-access-4cvv4\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.555507 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data-custom\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.555659 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmln\" (UniqueName: \"kubernetes.io/projected/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-kube-api-access-8nmln\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.555768 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-combined-ca-bundle\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.557866 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.559198 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.580573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmln\" (UniqueName: \"kubernetes.io/projected/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-kube-api-access-8nmln\") pod \"dnsmasq-dns-84c94df5fc-hgwfg\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.690294 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cvv4\" (UniqueName: \"kubernetes.io/projected/a9803018-06dd-4572-ae9e-eadc43492e39-kube-api-access-4cvv4\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " 
pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.690768 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data-custom\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.690823 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-combined-ca-bundle\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.690912 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.690937 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9803018-06dd-4572-ae9e-eadc43492e39-logs\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.692785 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9803018-06dd-4572-ae9e-eadc43492e39-logs\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.704105 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-combined-ca-bundle\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.704362 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.704672 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data-custom\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.711687 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cvv4\" (UniqueName: \"kubernetes.io/projected/a9803018-06dd-4572-ae9e-eadc43492e39-kube-api-access-4cvv4\") pod \"barbican-api-7bd5f4cd4-gqk7t\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.816294 4856 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:05 crc kubenswrapper[4856]: I1122 07:28:05.857119 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.053919 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-68b59dd9f8-dgbs9"] Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.121692 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-f69556b5c-qmsmf"] Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.294738 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerStarted","Data":"44b2c896dea59752c453d550f83421f09e99e5dd57e8c6bcb68d0091000d3aab"} Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.294905 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-central-agent" containerID="cri-o://dafb9cfcc49fa79ffd70a382afa51ec5d061d1e21b2c8205f6194e3556d39a25" gracePeriod=30 Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.295207 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.295489 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="proxy-httpd" containerID="cri-o://44b2c896dea59752c453d550f83421f09e99e5dd57e8c6bcb68d0091000d3aab" gracePeriod=30 Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.295596 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="sg-core" containerID="cri-o://12cc83a401687c88f8ebc35ffd49d10a0a2decd86d166d325cfd735c090acc3f" gracePeriod=30 Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.295643 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-notification-agent" containerID="cri-o://f0b3ea9b180fa5996ea105f756332563275a060a41ea330b1991aa0ac038295a" gracePeriod=30 Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.322669 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.385642393 podStartE2EDuration="17.322650639s" podCreationTimestamp="2025-11-22 07:27:49 +0000 UTC" firstStartedPulling="2025-11-22 07:27:50.288398507 +0000 UTC m=+1512.701791765" lastFinishedPulling="2025-11-22 07:28:05.225406743 +0000 UTC m=+1527.638800011" observedRunningTime="2025-11-22 07:28:06.320156162 +0000 UTC m=+1528.733549430" watchObservedRunningTime="2025-11-22 07:28:06.322650639 +0000 UTC m=+1528.736043887" Nov 22 07:28:06 crc kubenswrapper[4856]: W1122 07:28:06.379837 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfd5417e_43d6_4fe2_807c_8c203cb74c0a.slice/crio-b85204fbdfdf859441b4e75d2ce56a7c02f478ec81a958626410d1abc75e637c WatchSource:0}: Error finding container b85204fbdfdf859441b4e75d2ce56a7c02f478ec81a958626410d1abc75e637c: Status 404 returned 
error can't find the container with id b85204fbdfdf859441b4e75d2ce56a7c02f478ec81a958626410d1abc75e637c Nov 22 07:28:06 crc kubenswrapper[4856]: I1122 07:28:06.998726 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5f4cd4-gqk7t"] Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.090457 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84c94df5fc-hgwfg"] Nov 22 07:28:07 crc kubenswrapper[4856]: W1122 07:28:07.109901 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc22daaa_cc21_45b9_b3a3_4c7f4465b0fe.slice/crio-0d47897308af6d12780c26eb5b4ff07acac479a8ee127a44329ca4792b760a37 WatchSource:0}: Error finding container 0d47897308af6d12780c26eb5b4ff07acac479a8ee127a44329ca4792b760a37: Status 404 returned error can't find the container with id 0d47897308af6d12780c26eb5b4ff07acac479a8ee127a44329ca4792b760a37 Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.331002 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" event={"ID":"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe","Type":"ContainerStarted","Data":"0d47897308af6d12780c26eb5b4ff07acac479a8ee127a44329ca4792b760a37"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.334317 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" event={"ID":"665dbe7c-5370-4a97-8502-e9b25c8acd3a","Type":"ContainerStarted","Data":"c49e3d050a13649ab2b85b71a7fc7be52f04efb484f94677a16f0203aca5d2b7"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.337470 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f69556b5c-qmsmf" event={"ID":"bfd5417e-43d6-4fe2-807c-8c203cb74c0a","Type":"ContainerStarted","Data":"b85204fbdfdf859441b4e75d2ce56a7c02f478ec81a958626410d1abc75e637c"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.342684 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerStarted","Data":"6a4c13f075a387354b957b2842020111902cb65c1c7a324b385db380ff0cca4b"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.347081 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" event={"ID":"a4ab0b87-dec0-42f2-86a2-4e12a02c7573","Type":"ContainerStarted","Data":"d74045845a7dba814efb401d7b033582ccdbf8ee08845c8e8fdf207bd5c6d465"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.358936 4856 generic.go:334] "Generic (PLEG): container finished" podID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerID="44b2c896dea59752c453d550f83421f09e99e5dd57e8c6bcb68d0091000d3aab" exitCode=0 Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.358973 4856 generic.go:334] "Generic (PLEG): container finished" podID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerID="12cc83a401687c88f8ebc35ffd49d10a0a2decd86d166d325cfd735c090acc3f" exitCode=2 Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.358982 4856 generic.go:334] "Generic (PLEG): container finished" podID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerID="f0b3ea9b180fa5996ea105f756332563275a060a41ea330b1991aa0ac038295a" exitCode=0 Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.358988 4856 generic.go:334] "Generic (PLEG): container finished" podID="40a3bda1-818f-456f-a44b-d6af3971c3ce" 
containerID="dafb9cfcc49fa79ffd70a382afa51ec5d061d1e21b2c8205f6194e3556d39a25" exitCode=0 Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.359009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerDied","Data":"44b2c896dea59752c453d550f83421f09e99e5dd57e8c6bcb68d0091000d3aab"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.359037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerDied","Data":"12cc83a401687c88f8ebc35ffd49d10a0a2decd86d166d325cfd735c090acc3f"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.359047 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerDied","Data":"f0b3ea9b180fa5996ea105f756332563275a060a41ea330b1991aa0ac038295a"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.359054 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerDied","Data":"dafb9cfcc49fa79ffd70a382afa51ec5d061d1e21b2c8205f6194e3556d39a25"} Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.635758 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.679100 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" podStartSLOduration=2.866824258 podStartE2EDuration="16.679082699s" podCreationTimestamp="2025-11-22 07:27:51 +0000 UTC" firstStartedPulling="2025-11-22 07:27:52.725861991 +0000 UTC m=+1515.139255249" lastFinishedPulling="2025-11-22 07:28:06.538120442 +0000 UTC m=+1528.951513690" observedRunningTime="2025-11-22 07:28:07.377025579 +0000 UTC m=+1529.790418837" watchObservedRunningTime="2025-11-22 07:28:07.679082699 +0000 UTC m=+1530.092475957" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.742327 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-run-httpd\") pod \"40a3bda1-818f-456f-a44b-d6af3971c3ce\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.742376 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-config-data\") pod \"40a3bda1-818f-456f-a44b-d6af3971c3ce\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.742428 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-combined-ca-bundle\") pod \"40a3bda1-818f-456f-a44b-d6af3971c3ce\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.742459 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-sg-core-conf-yaml\") pod \"40a3bda1-818f-456f-a44b-d6af3971c3ce\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.742501 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-scripts\") pod \"40a3bda1-818f-456f-a44b-d6af3971c3ce\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.742573 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nltxb\" (UniqueName: \"kubernetes.io/projected/40a3bda1-818f-456f-a44b-d6af3971c3ce-kube-api-access-nltxb\") pod \"40a3bda1-818f-456f-a44b-d6af3971c3ce\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.742607 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-log-httpd\") pod \"40a3bda1-818f-456f-a44b-d6af3971c3ce\" (UID: \"40a3bda1-818f-456f-a44b-d6af3971c3ce\") " Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.743639 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "40a3bda1-818f-456f-a44b-d6af3971c3ce" (UID: "40a3bda1-818f-456f-a44b-d6af3971c3ce"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.743864 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "40a3bda1-818f-456f-a44b-d6af3971c3ce" (UID: "40a3bda1-818f-456f-a44b-d6af3971c3ce"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.755865 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-scripts" (OuterVolumeSpecName: "scripts") pod "40a3bda1-818f-456f-a44b-d6af3971c3ce" (UID: "40a3bda1-818f-456f-a44b-d6af3971c3ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.756057 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40a3bda1-818f-456f-a44b-d6af3971c3ce-kube-api-access-nltxb" (OuterVolumeSpecName: "kube-api-access-nltxb") pod "40a3bda1-818f-456f-a44b-d6af3971c3ce" (UID: "40a3bda1-818f-456f-a44b-d6af3971c3ce"). InnerVolumeSpecName "kube-api-access-nltxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.797539 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "40a3bda1-818f-456f-a44b-d6af3971c3ce" (UID: "40a3bda1-818f-456f-a44b-d6af3971c3ce"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.847499 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.847851 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.847868 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.847881 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nltxb\" (UniqueName: \"kubernetes.io/projected/40a3bda1-818f-456f-a44b-d6af3971c3ce-kube-api-access-nltxb\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.847894 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/40a3bda1-818f-456f-a44b-d6af3971c3ce-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.864827 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40a3bda1-818f-456f-a44b-d6af3971c3ce" (UID: "40a3bda1-818f-456f-a44b-d6af3971c3ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.883044 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-config-data" (OuterVolumeSpecName: "config-data") pod "40a3bda1-818f-456f-a44b-d6af3971c3ce" (UID: "40a3bda1-818f-456f-a44b-d6af3971c3ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.950088 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:07 crc kubenswrapper[4856]: I1122 07:28:07.950139 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40a3bda1-818f-456f-a44b-d6af3971c3ce-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.323959 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79bdcb776d-cl77m"] Nov 22 07:28:08 crc kubenswrapper[4856]: E1122 07:28:08.324574 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-central-agent" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.324667 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-central-agent" Nov 22 07:28:08 crc kubenswrapper[4856]: E1122 07:28:08.324729 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="sg-core" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.324787 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="sg-core" Nov 22 07:28:08 crc kubenswrapper[4856]: E1122 07:28:08.324857 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-notification-agent" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.324909 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-notification-agent" Nov 22 07:28:08 crc kubenswrapper[4856]: E1122 07:28:08.324968 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="proxy-httpd" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.325056 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="proxy-httpd" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.325285 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-notification-agent" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.325361 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="sg-core" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.325418 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="ceilometer-central-agent" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.325473 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" containerName="proxy-httpd" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.326435 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.332968 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.334599 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.352343 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79bdcb776d-cl77m"] Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.394612 4856 generic.go:334] "Generic (PLEG): container finished" podID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerID="0b319424567ba82485defaa2b5170384a9d60d7def3d4ce0a32a2f9a351f262d" exitCode=0 Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.394695 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" event={"ID":"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe","Type":"ContainerDied","Data":"0b319424567ba82485defaa2b5170384a9d60d7def3d4ce0a32a2f9a351f262d"} Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.400120 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerStarted","Data":"9f32f37097e3245ac33236c34783306c1bca68b9b54405f8b43edb400fe1e2e8"} Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.400161 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerStarted","Data":"4cf1ecab5a0d71491cde66ab7a4a2c05712de67111850321a6b53bfea9d001a2"} Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.401215 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.401241 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.404816 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.407943 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"40a3bda1-818f-456f-a44b-d6af3971c3ce","Type":"ContainerDied","Data":"9f5eb4e77e6740fac3a37a0515e97dbde5b8faea7697534313a4568430ac17ba"} Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.408050 4856 scope.go:117] "RemoveContainer" containerID="44b2c896dea59752c453d550f83421f09e99e5dd57e8c6bcb68d0091000d3aab" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.464383 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-internal-tls-certs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.464464 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcxnm\" (UniqueName: \"kubernetes.io/projected/39f7a457-9a5c-48b5-86c0-24d274596c8a-kube-api-access-zcxnm\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.464515 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-public-tls-certs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.464624 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-combined-ca-bundle\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.464682 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f7a457-9a5c-48b5-86c0-24d274596c8a-logs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.464721 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data-custom\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.464754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.465299 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podStartSLOduration=3.465283272 podStartE2EDuration="3.465283272s" podCreationTimestamp="2025-11-22 07:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:28:08.46299477 +0000 UTC m=+1530.876388048" watchObservedRunningTime="2025-11-22 07:28:08.465283272 +0000 UTC m=+1530.878676530" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.496442 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.507494 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.584112 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.584347 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-internal-tls-certs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.584456 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcxnm\" (UniqueName: \"kubernetes.io/projected/39f7a457-9a5c-48b5-86c0-24d274596c8a-kube-api-access-zcxnm\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.584563 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-public-tls-certs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.584614 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-combined-ca-bundle\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.584693 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f7a457-9a5c-48b5-86c0-24d274596c8a-logs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.584788 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data-custom\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.586866 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f7a457-9a5c-48b5-86c0-24d274596c8a-logs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.604765 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-public-tls-certs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.606172 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data-custom\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.615452 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-combined-ca-bundle\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.619090 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcxnm\" (UniqueName: \"kubernetes.io/projected/39f7a457-9a5c-48b5-86c0-24d274596c8a-kube-api-access-zcxnm\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.639069 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.650723 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-internal-tls-certs\") pod \"barbican-api-79bdcb776d-cl77m\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.655847 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.657920 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.658876 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.661208 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.661367 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.670290 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.729567 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40a3bda1-818f-456f-a44b-d6af3971c3ce" path="/var/lib/kubelet/pods/40a3bda1-818f-456f-a44b-d6af3971c3ce/volumes" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.793620 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-run-httpd\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.793723 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-config-data\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.793813 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-log-httpd\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.793876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-scripts\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.794132 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9v42\" (UniqueName: \"kubernetes.io/projected/753b7046-c6a6-4a8a-bc9c-46b1161c43df-kube-api-access-w9v42\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.794273 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.794321 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.895732 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.895840 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-run-httpd\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.895879 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-config-data\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.895926 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-log-httpd\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.895957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-scripts\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.896014 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9v42\" (UniqueName: \"kubernetes.io/projected/753b7046-c6a6-4a8a-bc9c-46b1161c43df-kube-api-access-w9v42\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.896051 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.897086 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-log-httpd\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.897349 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-run-httpd\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.900212 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-scripts\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.901654 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.901705 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.903264 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-config-data\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:08 crc kubenswrapper[4856]: I1122 07:28:08.921280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9v42\" (UniqueName: \"kubernetes.io/projected/753b7046-c6a6-4a8a-bc9c-46b1161c43df-kube-api-access-w9v42\") pod \"ceilometer-0\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " pod="openstack/ceilometer-0" Nov 22 07:28:09 crc kubenswrapper[4856]: I1122 07:28:09.019126 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:09 crc kubenswrapper[4856]: I1122 07:28:09.322057 4856 scope.go:117] "RemoveContainer" containerID="12cc83a401687c88f8ebc35ffd49d10a0a2decd86d166d325cfd735c090acc3f" Nov 22 07:28:09 crc kubenswrapper[4856]: I1122 07:28:09.623487 4856 scope.go:117] "RemoveContainer" containerID="f0b3ea9b180fa5996ea105f756332563275a060a41ea330b1991aa0ac038295a" Nov 22 07:28:09 crc kubenswrapper[4856]: I1122 07:28:09.765803 4856 scope.go:117] "RemoveContainer" containerID="dafb9cfcc49fa79ffd70a382afa51ec5d061d1e21b2c8205f6194e3556d39a25" Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.238976 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79bdcb776d-cl77m"] Nov 22 07:28:10 crc kubenswrapper[4856]: W1122 07:28:10.252719 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39f7a457_9a5c_48b5_86c0_24d274596c8a.slice/crio-d6fe7529ff8e811824319a6266cace19801991ea250affe344e2f1ecfd121999 WatchSource:0}: Error finding container d6fe7529ff8e811824319a6266cace19801991ea250affe344e2f1ecfd121999: Status 404 returned error can't find the container with id d6fe7529ff8e811824319a6266cace19801991ea250affe344e2f1ecfd121999 Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.343887 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:10 crc kubenswrapper[4856]: W1122 07:28:10.353672 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod753b7046_c6a6_4a8a_bc9c_46b1161c43df.slice/crio-0195e49453eeb03a8da98c792647dcdd235fc34de81a21f71511875a29f4f525 WatchSource:0}: Error finding container 0195e49453eeb03a8da98c792647dcdd235fc34de81a21f71511875a29f4f525: Status 404 returned error can't find the container with id 0195e49453eeb03a8da98c792647dcdd235fc34de81a21f71511875a29f4f525 Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.431205 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79bdcb776d-cl77m" 
event={"ID":"39f7a457-9a5c-48b5-86c0-24d274596c8a","Type":"ContainerStarted","Data":"d6fe7529ff8e811824319a6266cace19801991ea250affe344e2f1ecfd121999"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.437985 4856 generic.go:334] "Generic (PLEG): container finished" podID="d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" containerID="e506d22d373d63c5c5df7338ebfdf37d6f2889f6528d9d2937bd53d522fa657f" exitCode=0 Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.438112 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n9nhw" event={"ID":"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c","Type":"ContainerDied","Data":"e506d22d373d63c5c5df7338ebfdf37d6f2889f6528d9d2937bd53d522fa657f"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.445634 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" event={"ID":"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe","Type":"ContainerStarted","Data":"eb5a53ded4a3b71ba2fb136a0ac07f9a5cb9550bb6a8048e04f5cf32e604d7a0"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.445686 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.454458 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" event={"ID":"665dbe7c-5370-4a97-8502-e9b25c8acd3a","Type":"ContainerStarted","Data":"79ce02c0e12e71d034284ed8bae98790aa968294e2855ff785b9729ddd86f16b"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.454636 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" event={"ID":"665dbe7c-5370-4a97-8502-e9b25c8acd3a","Type":"ContainerStarted","Data":"89fb7a00fd4efc74515a0c3d4a20db20a62bcd9de48f98ba66ab6036caf8a420"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.463017 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f69556b5c-qmsmf" event={"ID":"bfd5417e-43d6-4fe2-807c-8c203cb74c0a","Type":"ContainerStarted","Data":"08e96c872138b89aa87fe681eda59fce3d594656121c84a13f4d89a1c5be6ca8"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.463069 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f69556b5c-qmsmf" event={"ID":"bfd5417e-43d6-4fe2-807c-8c203cb74c0a","Type":"ContainerStarted","Data":"df445e7e3ade77c2dd919f37116927b5b747d07550482260db0ad6f5970682fd"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.468490 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerStarted","Data":"0195e49453eeb03a8da98c792647dcdd235fc34de81a21f71511875a29f4f525"} Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.502426 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" podStartSLOduration=2.373308699 podStartE2EDuration="5.502406734s" podCreationTimestamp="2025-11-22 07:28:05 +0000 UTC" firstStartedPulling="2025-11-22 07:28:06.489346355 +0000 UTC m=+1528.902739603" lastFinishedPulling="2025-11-22 07:28:09.61844438 +0000 UTC m=+1532.031837638" observedRunningTime="2025-11-22 07:28:10.482250355 +0000 UTC m=+1532.895643613" watchObservedRunningTime="2025-11-22 07:28:10.502406734 +0000 UTC m=+1532.915799992" Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.530039 4856 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" podStartSLOduration=5.530014394 podStartE2EDuration="5.530014394s" podCreationTimestamp="2025-11-22 07:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:28:10.520896197 +0000 UTC m=+1532.934289485" watchObservedRunningTime="2025-11-22 07:28:10.530014394 +0000 UTC m=+1532.943407662" Nov 22 07:28:10 crc kubenswrapper[4856]: I1122 07:28:10.547354 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-f69556b5c-qmsmf" podStartSLOduration=2.419323491 podStartE2EDuration="5.547330986s" podCreationTimestamp="2025-11-22 07:28:05 +0000 UTC" firstStartedPulling="2025-11-22 07:28:06.489580792 +0000 UTC m=+1528.902974050" lastFinishedPulling="2025-11-22 07:28:09.617588287 +0000 UTC m=+1532.030981545" observedRunningTime="2025-11-22 07:28:10.540958722 +0000 UTC m=+1532.954351980" watchObservedRunningTime="2025-11-22 07:28:10.547330986 +0000 UTC m=+1532.960724264" Nov 22 07:28:11 crc kubenswrapper[4856]: I1122 07:28:11.225353 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:11 crc kubenswrapper[4856]: I1122 07:28:11.488651 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79bdcb776d-cl77m" event={"ID":"39f7a457-9a5c-48b5-86c0-24d274596c8a","Type":"ContainerStarted","Data":"4b3192676d3e19f237ce934c70e2e2105edb9e9415b2d7c5b848a4de24f6ac9a"} Nov 22 07:28:11 crc kubenswrapper[4856]: I1122 07:28:11.488714 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79bdcb776d-cl77m" event={"ID":"39f7a457-9a5c-48b5-86c0-24d274596c8a","Type":"ContainerStarted","Data":"b6356dec8e3af2060f0508772909c3164a9dbf1ad47a0fddc1e261b2db1f8b4f"} Nov 22 07:28:11 crc kubenswrapper[4856]: I1122 07:28:11.488843 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:11 crc kubenswrapper[4856]: I1122 07:28:11.490477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerStarted","Data":"3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9"} Nov 22 07:28:11 crc kubenswrapper[4856]: I1122 07:28:11.528983 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79bdcb776d-cl77m" podStartSLOduration=3.528949227 podStartE2EDuration="3.528949227s" podCreationTimestamp="2025-11-22 07:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:28:11.517172175 +0000 UTC m=+1533.930565443" watchObservedRunningTime="2025-11-22 07:28:11.528949227 +0000 UTC m=+1533.942342485" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.054776 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.160245 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h44lk\" (UniqueName: \"kubernetes.io/projected/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-kube-api-access-h44lk\") pod \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.160416 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-scripts\") pod \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.160448 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-db-sync-config-data\") pod \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.160470 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-combined-ca-bundle\") pod \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.160557 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-config-data\") pod \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.160626 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-etc-machine-id\") pod \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\" (UID: \"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c\") " Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.161046 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" (UID: "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.164372 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" (UID: "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.165116 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-scripts" (OuterVolumeSpecName: "scripts") pod "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" (UID: "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.165290 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-kube-api-access-h44lk" (OuterVolumeSpecName: "kube-api-access-h44lk") pod "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" (UID: "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c"). InnerVolumeSpecName "kube-api-access-h44lk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.237640 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" (UID: "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.249890 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-config-data" (OuterVolumeSpecName: "config-data") pod "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" (UID: "d8c4fd78-c2bf-4a39-8db9-e511ae36a38c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.263901 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.263940 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.263952 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.263960 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.263968 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.263984 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h44lk\" (UniqueName: \"kubernetes.io/projected/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c-kube-api-access-h44lk\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.500289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-n9nhw" event={"ID":"d8c4fd78-c2bf-4a39-8db9-e511ae36a38c","Type":"ContainerDied","Data":"ee6f4bb3d77c6bb784c9b5850eae8ece617005f7cdaaac2e6df4774a4413ab71"} Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.500329 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6f4bb3d77c6bb784c9b5850eae8ece617005f7cdaaac2e6df4774a4413ab71" Nov 22 07:28:12 crc 
kubenswrapper[4856]: I1122 07:28:12.500387 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-n9nhw" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.503333 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerStarted","Data":"562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775"} Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.503571 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.923842 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:12 crc kubenswrapper[4856]: E1122 07:28:12.924501 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" containerName="cinder-db-sync" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.925829 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" containerName="cinder-db-sync" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.926038 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" containerName="cinder-db-sync" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.927626 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.933106 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.933344 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.933665 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 07:28:12 crc kubenswrapper[4856]: I1122 07:28:12.933830 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-khp4b" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:12.994597 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.036608 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c94df5fc-hgwfg"] Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.036900 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" podUID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerName="dnsmasq-dns" containerID="cri-o://eb5a53ded4a3b71ba2fb136a0ac07f9a5cb9550bb6a8048e04f5cf32e604d7a0" gracePeriod=10 Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.085833 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjg6t\" (UniqueName: \"kubernetes.io/projected/61083129-301d-45a9-92be-2afa22968773-kube-api-access-kjg6t\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.085884 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-scripts\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.085915 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.085937 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.085954 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.085973 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61083129-301d-45a9-92be-2afa22968773-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.118805 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d4b4d75d9-r4bms"] Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.120530 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.152944 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4b4d75d9-r4bms"] Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.196686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.196736 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.196754 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61083129-301d-45a9-92be-2afa22968773-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.196937 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjg6t\" (UniqueName: \"kubernetes.io/projected/61083129-301d-45a9-92be-2afa22968773-kube-api-access-kjg6t\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.196964 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-scripts\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.197004 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.217695 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.219542 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.227733 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61083129-301d-45a9-92be-2afa22968773-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.231776 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.235845 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.236415 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.236492 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-scripts\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.246257 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.251004 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.270159 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjg6t\" (UniqueName: \"kubernetes.io/projected/61083129-301d-45a9-92be-2afa22968773-kube-api-access-kjg6t\") pod \"cinder-scheduler-0\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.283394 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.299790 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-logs\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.303423 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.303701 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.303977 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr9md\" (UniqueName: \"kubernetes.io/projected/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-kube-api-access-zr9md\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.310029 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.310355 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.310552 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-svc\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.310768 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.310976 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-scripts\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 
07:28:13.311113 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.313415 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.313589 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-config\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.313777 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5mkt\" (UniqueName: \"kubernetes.io/projected/7a2cd411-a78b-4a0e-b667-94994b50d4da-kube-api-access-w5mkt\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: E1122 07:28:13.391772 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc22daaa_cc21_45b9_b3a3_4c7f4465b0fe.slice/crio-eb5a53ded4a3b71ba2fb136a0ac07f9a5cb9550bb6a8048e04f5cf32e604d7a0.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421500 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421598 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-scripts\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421633 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421684 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421703 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-config\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421725 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5mkt\" (UniqueName: \"kubernetes.io/projected/7a2cd411-a78b-4a0e-b667-94994b50d4da-kube-api-access-w5mkt\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421765 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-logs\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421802 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421819 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421859 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr9md\" (UniqueName: \"kubernetes.io/projected/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-kube-api-access-zr9md\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421894 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421914 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.421934 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-svc\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.422915 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.492957 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.496161 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-logs\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.500067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.501162 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.503774 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr9md\" (UniqueName: \"kubernetes.io/projected/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-kube-api-access-zr9md\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.504223 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-config\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.505791 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data-custom\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.509786 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-svc\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.512058 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.518638 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5mkt\" (UniqueName: 
\"kubernetes.io/projected/7a2cd411-a78b-4a0e-b667-94994b50d4da-kube-api-access-w5mkt\") pod \"dnsmasq-dns-6d4b4d75d9-r4bms\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.519160 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.525622 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-scripts\") pod \"cinder-api-0\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.533329 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.535960 4856 generic.go:334] "Generic (PLEG): container finished" podID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerID="eb5a53ded4a3b71ba2fb136a0ac07f9a5cb9550bb6a8048e04f5cf32e604d7a0" exitCode=0 Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.537149 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" event={"ID":"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe","Type":"ContainerDied","Data":"eb5a53ded4a3b71ba2fb136a0ac07f9a5cb9550bb6a8048e04f5cf32e604d7a0"} Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.809434 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:28:13 crc kubenswrapper[4856]: I1122 07:28:13.949135 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.214693 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4b4d75d9-r4bms"] Nov 22 07:28:14 crc kubenswrapper[4856]: W1122 07:28:14.216802 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a2cd411_a78b_4a0e_b667_94994b50d4da.slice/crio-c876c5078226875de0eb967f2e6faf89a3fb9849c4ec3b39fdd4171c353f98de WatchSource:0}: Error finding container c876c5078226875de0eb967f2e6faf89a3fb9849c4ec3b39fdd4171c353f98de: Status 404 returned error can't find the container with id c876c5078226875de0eb967f2e6faf89a3fb9849c4ec3b39fdd4171c353f98de Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.303143 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.435558 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.464474 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-sb\") pod \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.464548 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-swift-storage-0\") pod \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.464578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-nb\") pod \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.464605 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-config\") pod \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.464633 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nmln\" (UniqueName: \"kubernetes.io/projected/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-kube-api-access-8nmln\") pod \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.465242 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-svc\") pod \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\" (UID: \"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe\") " Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.477883 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-kube-api-access-8nmln" (OuterVolumeSpecName: "kube-api-access-8nmln") pod "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" (UID: "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe"). InnerVolumeSpecName "kube-api-access-8nmln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.568700 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" event={"ID":"7a2cd411-a78b-4a0e-b667-94994b50d4da","Type":"ContainerStarted","Data":"c876c5078226875de0eb967f2e6faf89a3fb9849c4ec3b39fdd4171c353f98de"} Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.570525 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229","Type":"ContainerStarted","Data":"513499d22ea0cb56432750f46b8ff8c6a28bf2a2957325549186f4feae8d14d3"} Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.573127 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nmln\" (UniqueName: \"kubernetes.io/projected/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-kube-api-access-8nmln\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.573794 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" event={"ID":"bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe","Type":"ContainerDied","Data":"0d47897308af6d12780c26eb5b4ff07acac479a8ee127a44329ca4792b760a37"} Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.573827 4856 scope.go:117] "RemoveContainer" containerID="eb5a53ded4a3b71ba2fb136a0ac07f9a5cb9550bb6a8048e04f5cf32e604d7a0" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.575068 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84c94df5fc-hgwfg" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.575807 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" (UID: "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.579096 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-config" (OuterVolumeSpecName: "config") pod "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" (UID: "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.592624 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61083129-301d-45a9-92be-2afa22968773","Type":"ContainerStarted","Data":"255b55d1a0579959dfb5954b4d203490367c4d2842f3725daaac8ca420f09fa8"} Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.610603 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" (UID: "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.626848 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" (UID: "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.636921 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerStarted","Data":"8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa"} Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.643176 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" (UID: "bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.649272 4856 scope.go:117] "RemoveContainer" containerID="0b319424567ba82485defaa2b5170384a9d60d7def3d4ce0a32a2f9a351f262d" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.674420 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.674460 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.674476 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.674487 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.674499 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.903013 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84c94df5fc-hgwfg"] Nov 22 07:28:14 crc kubenswrapper[4856]: I1122 07:28:14.914500 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84c94df5fc-hgwfg"] Nov 22 07:28:15 crc kubenswrapper[4856]: I1122 07:28:15.244395 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:15 crc kubenswrapper[4856]: I1122 07:28:15.652304 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229","Type":"ContainerStarted","Data":"4dce49cd8e431cf27ee37db7afc7046d6ad9a0266ec7d18449bef4ca6f09b4de"} Nov 22 07:28:15 crc kubenswrapper[4856]: I1122 
07:28:15.654103 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerID="2afc6abc382c4d636dbf6c18ee99d51a7bb85449371c2e3fe310052453b490d0" exitCode=0 Nov 22 07:28:15 crc kubenswrapper[4856]: I1122 07:28:15.654137 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" event={"ID":"7a2cd411-a78b-4a0e-b667-94994b50d4da","Type":"ContainerDied","Data":"2afc6abc382c4d636dbf6c18ee99d51a7bb85449371c2e3fe310052453b490d0"} Nov 22 07:28:16 crc kubenswrapper[4856]: I1122 07:28:16.669562 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229","Type":"ContainerStarted","Data":"85dfd3da1099f9484cabd3a3d84e1f862a7d4c96a0b8a18f7435129a0e40b58d"} Nov 22 07:28:16 crc kubenswrapper[4856]: I1122 07:28:16.670254 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api-log" containerID="cri-o://4dce49cd8e431cf27ee37db7afc7046d6ad9a0266ec7d18449bef4ca6f09b4de" gracePeriod=30 Nov 22 07:28:16 crc kubenswrapper[4856]: I1122 07:28:16.670663 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 07:28:16 crc kubenswrapper[4856]: I1122 07:28:16.670737 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api" containerID="cri-o://85dfd3da1099f9484cabd3a3d84e1f862a7d4c96a0b8a18f7435129a0e40b58d" gracePeriod=30 Nov 22 07:28:16 crc kubenswrapper[4856]: I1122 07:28:16.692652 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.692632264 podStartE2EDuration="3.692632264s" podCreationTimestamp="2025-11-22 07:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:28:16.688767018 +0000 UTC m=+1539.102160296" watchObservedRunningTime="2025-11-22 07:28:16.692632264 +0000 UTC m=+1539.106025522" Nov 22 07:28:16 crc kubenswrapper[4856]: I1122 07:28:16.723868 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" path="/var/lib/kubelet/pods/bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe/volumes" Nov 22 07:28:17 crc kubenswrapper[4856]: I1122 07:28:17.680557 4856 generic.go:334] "Generic (PLEG): container finished" podID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerID="4dce49cd8e431cf27ee37db7afc7046d6ad9a0266ec7d18449bef4ca6f09b4de" exitCode=143 Nov 22 07:28:17 crc kubenswrapper[4856]: I1122 07:28:17.680722 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229","Type":"ContainerDied","Data":"4dce49cd8e431cf27ee37db7afc7046d6ad9a0266ec7d18449bef4ca6f09b4de"} Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.699660 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61083129-301d-45a9-92be-2afa22968773","Type":"ContainerStarted","Data":"bd3b30c6c08463e2bae61be00dd76c48697a39a1512ce7e84ccf873b4b1e8e4d"} Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.705529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerStarted","Data":"395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573"} Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.705701 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-central-agent" containerID="cri-o://3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9" gracePeriod=30 Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.705963 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.706258 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="proxy-httpd" containerID="cri-o://395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573" gracePeriod=30 Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.706307 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="sg-core" containerID="cri-o://8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa" gracePeriod=30 Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.706341 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-notification-agent" containerID="cri-o://562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775" gracePeriod=30 Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.748194 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.528135875 podStartE2EDuration="10.748175036s" podCreationTimestamp="2025-11-22 07:28:08 +0000 UTC" firstStartedPulling="2025-11-22 07:28:10.35673968 +0000 UTC m=+1532.770132938" lastFinishedPulling="2025-11-22 07:28:16.576778841 +0000 UTC m=+1538.990172099" observedRunningTime="2025-11-22 07:28:18.745974616 +0000 UTC m=+1541.159367874" watchObservedRunningTime="2025-11-22 07:28:18.748175036 +0000 UTC m=+1541.161568294" Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.759581 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" event={"ID":"7a2cd411-a78b-4a0e-b667-94994b50d4da","Type":"ContainerStarted","Data":"bf30ee61fbad22f1709515fa30aae4a80f402ac967ce4f3e934886ffb2310cbe"} Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.759663 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:18 crc kubenswrapper[4856]: I1122 07:28:18.808353 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" podStartSLOduration=6.808336093 podStartE2EDuration="6.808336093s" podCreationTimestamp="2025-11-22 07:28:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:28:18.80087511 +0000 UTC m=+1541.214268368" watchObservedRunningTime="2025-11-22 07:28:18.808336093 +0000 UTC m=+1541.221729351" Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.759929 4856 generic.go:334] "Generic (PLEG): container finished" podID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" 
containerID="8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa" exitCode=2 Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.760429 4856 generic.go:334] "Generic (PLEG): container finished" podID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerID="562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775" exitCode=0 Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.760010 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerDied","Data":"8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa"} Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.760539 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerDied","Data":"562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775"} Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.762352 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61083129-301d-45a9-92be-2afa22968773","Type":"ContainerStarted","Data":"fc66357289067032478e99cefc75960bd0feb8dd63252e993fe2df2290125a73"} Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.791140 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.169729736 podStartE2EDuration="7.791122696s" podCreationTimestamp="2025-11-22 07:28:12 +0000 UTC" firstStartedPulling="2025-11-22 07:28:13.955259538 +0000 UTC m=+1536.368652796" lastFinishedPulling="2025-11-22 07:28:16.576652498 +0000 UTC m=+1538.990045756" observedRunningTime="2025-11-22 07:28:19.781806712 +0000 UTC m=+1542.195199970" watchObservedRunningTime="2025-11-22 07:28:19.791122696 +0000 UTC m=+1542.204515954" Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.939817 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:19 crc kubenswrapper[4856]: I1122 07:28:19.939863 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:20 crc kubenswrapper[4856]: I1122 07:28:20.940815 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:20 crc kubenswrapper[4856]: I1122 07:28:20.941559 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:22 crc kubenswrapper[4856]: I1122 07:28:22.666753 4856 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:22 crc kubenswrapper[4856]: I1122 07:28:22.674752 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:23 crc kubenswrapper[4856]: I1122 07:28:23.284825 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 07:28:23 crc kubenswrapper[4856]: I1122 07:28:23.286348 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.174:8080/\": dial tcp 10.217.0.174:8080: connect: connection refused" Nov 22 07:28:23 crc kubenswrapper[4856]: I1122 07:28:23.536567 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:28:23 crc kubenswrapper[4856]: I1122 07:28:23.603983 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-588bcb86c-tjc5x"] Nov 22 07:28:23 crc kubenswrapper[4856]: I1122 07:28:23.604213 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerName="dnsmasq-dns" containerID="cri-o://78e1262efa93dbe79c59f24fda69e8fedc14e777db6de1096f1712cbb85890b9" gracePeriod=10 Nov 22 07:28:23 crc kubenswrapper[4856]: I1122 07:28:23.669977 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:23 crc kubenswrapper[4856]: I1122 07:28:23.669994 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:24 crc kubenswrapper[4856]: I1122 07:28:24.805459 4856 generic.go:334] "Generic (PLEG): container finished" podID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerID="78e1262efa93dbe79c59f24fda69e8fedc14e777db6de1096f1712cbb85890b9" exitCode=0 Nov 22 07:28:24 crc kubenswrapper[4856]: I1122 07:28:24.806160 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" event={"ID":"4cae477b-f4c8-416e-ac2d-de6cecccfafc","Type":"ContainerDied","Data":"78e1262efa93dbe79c59f24fda69e8fedc14e777db6de1096f1712cbb85890b9"} Nov 22 07:28:25 crc kubenswrapper[4856]: I1122 07:28:25.025038 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get 
\"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:25 crc kubenswrapper[4856]: I1122 07:28:25.025409 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.024780 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.024797 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.067582 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.112407 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-sb\") pod \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.112459 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-config\") pod \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.112481 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-nb\") pod \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.112501 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-swift-storage-0\") pod \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.112582 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9zdw\" (UniqueName: \"kubernetes.io/projected/4cae477b-f4c8-416e-ac2d-de6cecccfafc-kube-api-access-q9zdw\") pod \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\" (UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.112657 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-svc\") pod \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\" 
(UID: \"4cae477b-f4c8-416e-ac2d-de6cecccfafc\") " Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.120680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cae477b-f4c8-416e-ac2d-de6cecccfafc-kube-api-access-q9zdw" (OuterVolumeSpecName: "kube-api-access-q9zdw") pod "4cae477b-f4c8-416e-ac2d-de6cecccfafc" (UID: "4cae477b-f4c8-416e-ac2d-de6cecccfafc"). InnerVolumeSpecName "kube-api-access-q9zdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.172298 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4cae477b-f4c8-416e-ac2d-de6cecccfafc" (UID: "4cae477b-f4c8-416e-ac2d-de6cecccfafc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.177297 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4cae477b-f4c8-416e-ac2d-de6cecccfafc" (UID: "4cae477b-f4c8-416e-ac2d-de6cecccfafc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.178819 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4cae477b-f4c8-416e-ac2d-de6cecccfafc" (UID: "4cae477b-f4c8-416e-ac2d-de6cecccfafc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.187859 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4cae477b-f4c8-416e-ac2d-de6cecccfafc" (UID: "4cae477b-f4c8-416e-ac2d-de6cecccfafc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.214127 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.214609 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.214698 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9zdw\" (UniqueName: \"kubernetes.io/projected/4cae477b-f4c8-416e-ac2d-de6cecccfafc-kube-api-access-q9zdw\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.214786 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.214855 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.215888 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-config" (OuterVolumeSpecName: "config") pod "4cae477b-f4c8-416e-ac2d-de6cecccfafc" (UID: "4cae477b-f4c8-416e-ac2d-de6cecccfafc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.316104 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cae477b-f4c8-416e-ac2d-de6cecccfafc-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.828553 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" event={"ID":"4cae477b-f4c8-416e-ac2d-de6cecccfafc","Type":"ContainerDied","Data":"c78931d6ee08fa0b1c001c68375f58b864096e281bbf51e700fe6a651e27035e"} Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.828615 4856 scope.go:117] "RemoveContainer" containerID="78e1262efa93dbe79c59f24fda69e8fedc14e777db6de1096f1712cbb85890b9" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.828663 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.852526 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-588bcb86c-tjc5x"] Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.860793 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-588bcb86c-tjc5x"] Nov 22 07:28:26 crc kubenswrapper[4856]: I1122 07:28:26.871951 4856 scope.go:117] "RemoveContainer" containerID="7147ecaff322802e92bd7b2bc26f58f2536def0b991b416a409d90dc898d0ab6" Nov 22 07:28:27 crc kubenswrapper[4856]: I1122 07:28:27.672000 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:27 crc kubenswrapper[4856]: I1122 07:28:27.678682 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:28 crc kubenswrapper[4856]: I1122 07:28:28.284887 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.174:8080/\": dial tcp 10.217.0.174:8080: connect: connection refused" Nov 22 07:28:28 crc kubenswrapper[4856]: I1122 07:28:28.679659 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:28 crc kubenswrapper[4856]: I1122 07:28:28.679708 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:28 crc kubenswrapper[4856]: I1122 07:28:28.723570 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" path="/var/lib/kubelet/pods/4cae477b-f4c8-416e-ac2d-de6cecccfafc/volumes" Nov 22 07:28:28 crc kubenswrapper[4856]: I1122 07:28:28.852103 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.176:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.107745 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:30 crc 
kubenswrapper[4856]: I1122 07:28:30.107789 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.108042 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.108105 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.108710 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="barbican-api-log" containerStatusID={"Type":"cri-o","ID":"4cf1ecab5a0d71491cde66ab7a4a2c05712de67111850321a6b53bfea9d001a2"} pod="openstack/barbican-api-7bd5f4cd4-gqk7t" containerMessage="Container barbican-api-log failed liveness probe, will be restarted" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.108743 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="barbican-api" containerStatusID={"Type":"cri-o","ID":"9f32f37097e3245ac33236c34783306c1bca68b9b54405f8b43edb400fe1e2e8"} pod="openstack/barbican-api-7bd5f4cd4-gqk7t" containerMessage="Container barbican-api failed liveness probe, will be restarted" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.108775 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" containerID="cri-o://9f32f37097e3245ac33236c34783306c1bca68b9b54405f8b43edb400fe1e2e8" gracePeriod=30 Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.113986 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.114029 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": EOF" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.435985 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.436087 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:28:30 crc kubenswrapper[4856]: I1122 07:28:30.808070 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-588bcb86c-tjc5x" podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Nov 22 07:28:32 crc kubenswrapper[4856]: I1122 07:28:32.116327 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:32 crc kubenswrapper[4856]: I1122 07:28:32.122393 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:28:32 crc kubenswrapper[4856]: I1122 07:28:32.213017 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5f4cd4-gqk7t"] Nov 22 07:28:33 crc kubenswrapper[4856]: I1122 07:28:33.791826 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 22 07:28:33 crc kubenswrapper[4856]: I1122 07:28:33.893690 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.176:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:35 crc kubenswrapper[4856]: I1122 07:28:35.198754 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:35 crc kubenswrapper[4856]: I1122 07:28:35.198750 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:28:35 crc kubenswrapper[4856]: I1122 07:28:35.928241 4856 generic.go:334] "Generic (PLEG): container finished" podID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerID="3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9" exitCode=0 Nov 22 07:28:35 crc kubenswrapper[4856]: I1122 07:28:35.928282 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerDied","Data":"3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9"} Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.041445 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.597279 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:55548->10.217.0.171:9311: read: connection reset by peer" Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.597282 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": read tcp 10.217.0.2:55552->10.217.0.171:9311: read: connection reset by peer" Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.598201 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": dial tcp 10.217.0.171:9311: connect: connection 
refused" Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.598304 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": dial tcp 10.217.0.171:9311: connect: connection refused" Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.720244 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" containerID="cri-o://4cf1ecab5a0d71491cde66ab7a4a2c05712de67111850321a6b53bfea9d001a2" gracePeriod=30 Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.938670 4856 generic.go:334] "Generic (PLEG): container finished" podID="a9803018-06dd-4572-ae9e-eadc43492e39" containerID="9f32f37097e3245ac33236c34783306c1bca68b9b54405f8b43edb400fe1e2e8" exitCode=0 Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.938704 4856 generic.go:334] "Generic (PLEG): container finished" podID="a9803018-06dd-4572-ae9e-eadc43492e39" containerID="4cf1ecab5a0d71491cde66ab7a4a2c05712de67111850321a6b53bfea9d001a2" exitCode=143 Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.938726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerDied","Data":"9f32f37097e3245ac33236c34783306c1bca68b9b54405f8b43edb400fe1e2e8"} Nov 22 07:28:36 crc kubenswrapper[4856]: I1122 07:28:36.938754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerDied","Data":"4cf1ecab5a0d71491cde66ab7a4a2c05712de67111850321a6b53bfea9d001a2"} Nov 22 07:28:37 crc kubenswrapper[4856]: I1122 07:28:37.951299 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerStarted","Data":"25cfc123f943760b384bba6eccf6310ca76184caaab62367b5b423bc04c4c3ea"} Nov 22 07:28:37 crc kubenswrapper[4856]: I1122 07:28:37.951437 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" containerID="cri-o://f50496ada343b1b269ef32b6336f6bd23045c31ce40cb6c4b4bcc15fb847c984" gracePeriod=30 Nov 22 07:28:37 crc kubenswrapper[4856]: I1122 07:28:37.951523 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" containerID="cri-o://25cfc123f943760b384bba6eccf6310ca76184caaab62367b5b423bc04c4c3ea" gracePeriod=30 Nov 22 07:28:37 crc kubenswrapper[4856]: I1122 07:28:37.952014 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.171:9311/healthcheck\": dial tcp 10.217.0.171:9311: connect: connection refused" Nov 22 07:28:37 crc kubenswrapper[4856]: I1122 07:28:37.953531 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:37 crc kubenswrapper[4856]: I1122 07:28:37.953561 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerStarted","Data":"f50496ada343b1b269ef32b6336f6bd23045c31ce40cb6c4b4bcc15fb847c984"} Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.916189 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.967980 4856 generic.go:334] "Generic (PLEG): container finished" podID="a9803018-06dd-4572-ae9e-eadc43492e39" containerID="25cfc123f943760b384bba6eccf6310ca76184caaab62367b5b423bc04c4c3ea" exitCode=1 Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.968017 4856 generic.go:334] "Generic (PLEG): container finished" podID="a9803018-06dd-4572-ae9e-eadc43492e39" containerID="f50496ada343b1b269ef32b6336f6bd23045c31ce40cb6c4b4bcc15fb847c984" exitCode=143 Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.968041 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerDied","Data":"25cfc123f943760b384bba6eccf6310ca76184caaab62367b5b423bc04c4c3ea"} Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.968072 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerDied","Data":"f50496ada343b1b269ef32b6336f6bd23045c31ce40cb6c4b4bcc15fb847c984"} Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.968094 4856 scope.go:117] "RemoveContainer" containerID="9f32f37097e3245ac33236c34783306c1bca68b9b54405f8b43edb400fe1e2e8" Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.990278 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.990858 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="cinder-scheduler" containerID="cri-o://bd3b30c6c08463e2bae61be00dd76c48697a39a1512ce7e84ccf873b4b1e8e4d" gracePeriod=30 Nov 22 07:28:38 crc kubenswrapper[4856]: I1122 07:28:38.990989 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="probe" containerID="cri-o://fc66357289067032478e99cefc75960bd0feb8dd63252e993fe2df2290125a73" gracePeriod=30 Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.051052 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.091584 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.167559 4856 scope.go:117] "RemoveContainer" containerID="4cf1ecab5a0d71491cde66ab7a4a2c05712de67111850321a6b53bfea9d001a2" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.168964 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data\") pod \"a9803018-06dd-4572-ae9e-eadc43492e39\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.169016 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-combined-ca-bundle\") pod \"a9803018-06dd-4572-ae9e-eadc43492e39\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.169146 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data-custom\") pod \"a9803018-06dd-4572-ae9e-eadc43492e39\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.169198 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cvv4\" (UniqueName: \"kubernetes.io/projected/a9803018-06dd-4572-ae9e-eadc43492e39-kube-api-access-4cvv4\") pod \"a9803018-06dd-4572-ae9e-eadc43492e39\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.169235 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9803018-06dd-4572-ae9e-eadc43492e39-logs\") pod \"a9803018-06dd-4572-ae9e-eadc43492e39\" (UID: \"a9803018-06dd-4572-ae9e-eadc43492e39\") " Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.170027 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9803018-06dd-4572-ae9e-eadc43492e39-logs" (OuterVolumeSpecName: "logs") pod "a9803018-06dd-4572-ae9e-eadc43492e39" (UID: "a9803018-06dd-4572-ae9e-eadc43492e39"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.175130 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a9803018-06dd-4572-ae9e-eadc43492e39" (UID: "a9803018-06dd-4572-ae9e-eadc43492e39"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.175299 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9803018-06dd-4572-ae9e-eadc43492e39-kube-api-access-4cvv4" (OuterVolumeSpecName: "kube-api-access-4cvv4") pod "a9803018-06dd-4572-ae9e-eadc43492e39" (UID: "a9803018-06dd-4572-ae9e-eadc43492e39"). InnerVolumeSpecName "kube-api-access-4cvv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.201658 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9803018-06dd-4572-ae9e-eadc43492e39" (UID: "a9803018-06dd-4572-ae9e-eadc43492e39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.238250 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data" (OuterVolumeSpecName: "config-data") pod "a9803018-06dd-4572-ae9e-eadc43492e39" (UID: "a9803018-06dd-4572-ae9e-eadc43492e39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.272373 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.272414 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cvv4\" (UniqueName: \"kubernetes.io/projected/a9803018-06dd-4572-ae9e-eadc43492e39-kube-api-access-4cvv4\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.272434 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9803018-06dd-4572-ae9e-eadc43492e39-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.272447 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.272459 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9803018-06dd-4572-ae9e-eadc43492e39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.986460 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.986479 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5f4cd4-gqk7t" event={"ID":"a9803018-06dd-4572-ae9e-eadc43492e39","Type":"ContainerDied","Data":"6a4c13f075a387354b957b2842020111902cb65c1c7a324b385db380ff0cca4b"} Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.987051 4856 scope.go:117] "RemoveContainer" containerID="25cfc123f943760b384bba6eccf6310ca76184caaab62367b5b423bc04c4c3ea" Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.999676 4856 generic.go:334] "Generic (PLEG): container finished" podID="61083129-301d-45a9-92be-2afa22968773" containerID="fc66357289067032478e99cefc75960bd0feb8dd63252e993fe2df2290125a73" exitCode=0 Nov 22 07:28:39 crc kubenswrapper[4856]: I1122 07:28:39.999731 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61083129-301d-45a9-92be-2afa22968773","Type":"ContainerDied","Data":"fc66357289067032478e99cefc75960bd0feb8dd63252e993fe2df2290125a73"} Nov 22 07:28:40 crc kubenswrapper[4856]: I1122 07:28:40.040586 4856 scope.go:117] "RemoveContainer" containerID="f50496ada343b1b269ef32b6336f6bd23045c31ce40cb6c4b4bcc15fb847c984" Nov 22 07:28:40 crc kubenswrapper[4856]: I1122 07:28:40.042867 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5f4cd4-gqk7t"] Nov 22 07:28:40 crc kubenswrapper[4856]: I1122 07:28:40.049914 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7bd5f4cd4-gqk7t"] Nov 22 07:28:40 crc kubenswrapper[4856]: I1122 07:28:40.722359 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" path="/var/lib/kubelet/pods/a9803018-06dd-4572-ae9e-eadc43492e39/volumes" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.023050 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4bh2l"] Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.024798 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.024812 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.024827 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.024833 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.024845 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerName="dnsmasq-dns" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.024851 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerName="dnsmasq-dns" Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.024868 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerName="dnsmasq-dns" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.024875 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerName="dnsmasq-dns" Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.024890 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerName="init" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.024896 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerName="init" Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.024912 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerName="init" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.024917 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerName="init" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025090 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025104 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025113 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc22daaa-cc21-45b9-b3a3-4c7f4465b0fe" containerName="dnsmasq-dns" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025125 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025135 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025147 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cae477b-f4c8-416e-ac2d-de6cecccfafc" containerName="dnsmasq-dns" Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.025499 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025534 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api-log" Nov 22 07:28:43 crc kubenswrapper[4856]: E1122 07:28:43.025548 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.025554 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9803018-06dd-4572-ae9e-eadc43492e39" containerName="barbican-api" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.026630 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.039953 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bh2l"] Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.049266 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-catalog-content\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.049402 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-utilities\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.049560 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjf2p\" (UniqueName: \"kubernetes.io/projected/e149205c-8786-42a3-9531-9e17bc47d2b7-kube-api-access-bjf2p\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.152160 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjf2p\" (UniqueName: \"kubernetes.io/projected/e149205c-8786-42a3-9531-9e17bc47d2b7-kube-api-access-bjf2p\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.152351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-catalog-content\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.152420 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-utilities\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.153283 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-utilities\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.153348 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-catalog-content\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.178394 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bjf2p\" (UniqueName: \"kubernetes.io/projected/e149205c-8786-42a3-9531-9e17bc47d2b7-kube-api-access-bjf2p\") pod \"redhat-marketplace-4bh2l\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.358212 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:43 crc kubenswrapper[4856]: I1122 07:28:43.837400 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bh2l"] Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.054480 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bh2l" event={"ID":"e149205c-8786-42a3-9531-9e17bc47d2b7","Type":"ContainerStarted","Data":"f210fdb29c48f380d6de70c3f93648453504b4d7ef6535354d674db1faa89010"} Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.056472 4856 generic.go:334] "Generic (PLEG): container finished" podID="61083129-301d-45a9-92be-2afa22968773" containerID="bd3b30c6c08463e2bae61be00dd76c48697a39a1512ce7e84ccf873b4b1e8e4d" exitCode=0 Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.056529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61083129-301d-45a9-92be-2afa22968773","Type":"ContainerDied","Data":"bd3b30c6c08463e2bae61be00dd76c48697a39a1512ce7e84ccf873b4b1e8e4d"} Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.608900 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.684612 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data-custom\") pod \"61083129-301d-45a9-92be-2afa22968773\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.684698 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-scripts\") pod \"61083129-301d-45a9-92be-2afa22968773\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.684832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data\") pod \"61083129-301d-45a9-92be-2afa22968773\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.684923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjg6t\" (UniqueName: \"kubernetes.io/projected/61083129-301d-45a9-92be-2afa22968773-kube-api-access-kjg6t\") pod \"61083129-301d-45a9-92be-2afa22968773\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.684955 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-combined-ca-bundle\") pod \"61083129-301d-45a9-92be-2afa22968773\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.684977 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61083129-301d-45a9-92be-2afa22968773-etc-machine-id\") pod \"61083129-301d-45a9-92be-2afa22968773\" (UID: \"61083129-301d-45a9-92be-2afa22968773\") " Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.685559 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61083129-301d-45a9-92be-2afa22968773-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "61083129-301d-45a9-92be-2afa22968773" (UID: "61083129-301d-45a9-92be-2afa22968773"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.698596 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61083129-301d-45a9-92be-2afa22968773-kube-api-access-kjg6t" (OuterVolumeSpecName: "kube-api-access-kjg6t") pod "61083129-301d-45a9-92be-2afa22968773" (UID: "61083129-301d-45a9-92be-2afa22968773"). InnerVolumeSpecName "kube-api-access-kjg6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.698621 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-scripts" (OuterVolumeSpecName: "scripts") pod "61083129-301d-45a9-92be-2afa22968773" (UID: "61083129-301d-45a9-92be-2afa22968773"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.699477 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "61083129-301d-45a9-92be-2afa22968773" (UID: "61083129-301d-45a9-92be-2afa22968773"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.763808 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61083129-301d-45a9-92be-2afa22968773" (UID: "61083129-301d-45a9-92be-2afa22968773"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.787651 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjg6t\" (UniqueName: \"kubernetes.io/projected/61083129-301d-45a9-92be-2afa22968773-kube-api-access-kjg6t\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.787966 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.788055 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/61083129-301d-45a9-92be-2afa22968773-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.788137 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.788205 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.808991 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data" (OuterVolumeSpecName: "config-data") pod "61083129-301d-45a9-92be-2afa22968773" (UID: "61083129-301d-45a9-92be-2afa22968773"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:44 crc kubenswrapper[4856]: I1122 07:28:44.890952 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61083129-301d-45a9-92be-2afa22968773-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.077073 4856 generic.go:334] "Generic (PLEG): container finished" podID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerID="02b55e56a3cc84c198d38a917f0119daba5800626cd391b3b89d232923266824" exitCode=0 Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.077356 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bh2l" event={"ID":"e149205c-8786-42a3-9531-9e17bc47d2b7","Type":"ContainerDied","Data":"02b55e56a3cc84c198d38a917f0119daba5800626cd391b3b89d232923266824"} Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.083204 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"61083129-301d-45a9-92be-2afa22968773","Type":"ContainerDied","Data":"255b55d1a0579959dfb5954b4d203490367c4d2842f3725daaac8ca420f09fa8"} Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.083270 4856 scope.go:117] "RemoveContainer" containerID="fc66357289067032478e99cefc75960bd0feb8dd63252e993fe2df2290125a73" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.083291 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.116811 4856 scope.go:117] "RemoveContainer" containerID="bd3b30c6c08463e2bae61be00dd76c48697a39a1512ce7e84ccf873b4b1e8e4d" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.155209 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.168459 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.179671 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:45 crc kubenswrapper[4856]: E1122 07:28:45.180074 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="cinder-scheduler" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.180091 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="cinder-scheduler" Nov 22 07:28:45 crc kubenswrapper[4856]: E1122 07:28:45.180127 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="probe" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.180133 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="probe" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.180307 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="probe" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.180335 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="61083129-301d-45a9-92be-2afa22968773" containerName="cinder-scheduler" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.181280 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.188743 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.192083 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.204351 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.204414 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.204599 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lbrg\" (UniqueName: \"kubernetes.io/projected/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-kube-api-access-2lbrg\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.204679 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.204735 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.204801 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-scripts\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.306172 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.306736 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.306339 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.307030 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lbrg\" (UniqueName: \"kubernetes.io/projected/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-kube-api-access-2lbrg\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.307147 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.307256 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.307380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-scripts\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.311014 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.311031 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.311794 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-scripts\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.314510 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 crc kubenswrapper[4856]: I1122 07:28:45.330336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lbrg\" (UniqueName: \"kubernetes.io/projected/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-kube-api-access-2lbrg\") pod \"cinder-scheduler-0\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " pod="openstack/cinder-scheduler-0" Nov 22 07:28:45 
crc kubenswrapper[4856]: I1122 07:28:45.554582 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:28:46 crc kubenswrapper[4856]: I1122 07:28:46.069273 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:28:46 crc kubenswrapper[4856]: I1122 07:28:46.095811 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec2d14e-7026-4f6d-a0b2-13ff53d5e124","Type":"ContainerStarted","Data":"227074d547e57dc8859918fbb888dc891d073356f7896f051bd49804025d626c"} Nov 22 07:28:46 crc kubenswrapper[4856]: I1122 07:28:46.730178 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61083129-301d-45a9-92be-2afa22968773" path="/var/lib/kubelet/pods/61083129-301d-45a9-92be-2afa22968773/volumes" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.137088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec2d14e-7026-4f6d-a0b2-13ff53d5e124","Type":"ContainerStarted","Data":"e5b7a326f0ad6ee2471d7167a3c293c93e8329469da146c3d10a4dab31910b17"} Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.145865 4856 generic.go:334] "Generic (PLEG): container finished" podID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerID="85dfd3da1099f9484cabd3a3d84e1f862a7d4c96a0b8a18f7435129a0e40b58d" exitCode=137 Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.145936 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229","Type":"ContainerDied","Data":"85dfd3da1099f9484cabd3a3d84e1f862a7d4c96a0b8a18f7435129a0e40b58d"} Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.153301 4856 generic.go:334] "Generic (PLEG): container finished" podID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerID="af96053151b51c3e715ef4e4bf6e02658b57829b1136e18b83d78a3017f1c33e" exitCode=0 Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.153335 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bh2l" event={"ID":"e149205c-8786-42a3-9531-9e17bc47d2b7","Type":"ContainerDied","Data":"af96053151b51c3e715ef4e4bf6e02658b57829b1136e18b83d78a3017f1c33e"} Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.381491 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.444221 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data\") pod \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.444659 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-logs\") pod \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.444685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-scripts\") pod \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.444885 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zr9md\" (UniqueName: \"kubernetes.io/projected/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-kube-api-access-zr9md\") pod \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.444928 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-etc-machine-id\") pod \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.444956 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data-custom\") pod \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.445101 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-combined-ca-bundle\") pod \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\" (UID: \"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229\") " Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.445172 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-logs" (OuterVolumeSpecName: "logs") pod "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" (UID: "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.445853 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.447969 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" (UID: "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.451005 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-kube-api-access-zr9md" (OuterVolumeSpecName: "kube-api-access-zr9md") pod "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" (UID: "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229"). InnerVolumeSpecName "kube-api-access-zr9md". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.462674 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-scripts" (OuterVolumeSpecName: "scripts") pod "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" (UID: "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.479948 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" (UID: "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.508643 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" (UID: "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.547775 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.547815 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.547826 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zr9md\" (UniqueName: \"kubernetes.io/projected/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-kube-api-access-zr9md\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.547835 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.547843 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.563502 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data" (OuterVolumeSpecName: "config-data") pod "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" (UID: "e2d4e33a-ad0c-41f5-8f65-a20df5a7c229"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:47 crc kubenswrapper[4856]: I1122 07:28:47.649803 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.166635 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bh2l" event={"ID":"e149205c-8786-42a3-9531-9e17bc47d2b7","Type":"ContainerStarted","Data":"7a11902417d8769a8a60bc050c719206838508793470f149b69627e7db894416"} Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.179094 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec2d14e-7026-4f6d-a0b2-13ff53d5e124","Type":"ContainerStarted","Data":"72290d753c232f9f411f4eca62ef3cf6c13d4eb7af108e1e14ff35b4c3746200"} Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.182032 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e2d4e33a-ad0c-41f5-8f65-a20df5a7c229","Type":"ContainerDied","Data":"513499d22ea0cb56432750f46b8ff8c6a28bf2a2957325549186f4feae8d14d3"} Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.182070 4856 scope.go:117] "RemoveContainer" containerID="85dfd3da1099f9484cabd3a3d84e1f862a7d4c96a0b8a18f7435129a0e40b58d" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.182148 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.199780 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4bh2l" podStartSLOduration=2.432307509 podStartE2EDuration="5.199760814s" podCreationTimestamp="2025-11-22 07:28:43 +0000 UTC" firstStartedPulling="2025-11-22 07:28:45.096432611 +0000 UTC m=+1567.509825869" lastFinishedPulling="2025-11-22 07:28:47.863885916 +0000 UTC m=+1570.277279174" observedRunningTime="2025-11-22 07:28:48.19663533 +0000 UTC m=+1570.610028598" watchObservedRunningTime="2025-11-22 07:28:48.199760814 +0000 UTC m=+1570.613154072" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.219028 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.219009898 podStartE2EDuration="3.219009898s" podCreationTimestamp="2025-11-22 07:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:28:48.213408516 +0000 UTC m=+1570.626801794" watchObservedRunningTime="2025-11-22 07:28:48.219009898 +0000 UTC m=+1570.632403156" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.240536 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.253425 4856 scope.go:117] "RemoveContainer" containerID="4dce49cd8e431cf27ee37db7afc7046d6ad9a0266ec7d18449bef4ca6f09b4de" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.256923 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.272495 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:48 crc kubenswrapper[4856]: E1122 07:28:48.273066 4856 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.273091 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api" Nov 22 07:28:48 crc kubenswrapper[4856]: E1122 07:28:48.273113 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api-log" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.273122 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api-log" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.273392 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api-log" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.273433 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" containerName="cinder-api" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.274682 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.276661 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.278218 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.278383 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.286105 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365186 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365240 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b049e107-76c1-4669-adb3-7b92560ef90d-logs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365270 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365338 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b049e107-76c1-4669-adb3-7b92560ef90d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365370 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxkg\" (UniqueName: \"kubernetes.io/projected/b049e107-76c1-4669-adb3-7b92560ef90d-kube-api-access-ndxkg\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365400 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365445 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-scripts\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.365471 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.466969 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467066 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-scripts\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467099 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467133 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467162 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b049e107-76c1-4669-adb3-7b92560ef90d-logs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467188 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467217 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467254 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b049e107-76c1-4669-adb3-7b92560ef90d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.467283 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndxkg\" (UniqueName: \"kubernetes.io/projected/b049e107-76c1-4669-adb3-7b92560ef90d-kube-api-access-ndxkg\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.469881 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b049e107-76c1-4669-adb3-7b92560ef90d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.471642 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.472046 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.472829 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b049e107-76c1-4669-adb3-7b92560ef90d-logs\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.477095 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data-custom\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.477554 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.480478 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.484136 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-scripts\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.488226 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndxkg\" (UniqueName: \"kubernetes.io/projected/b049e107-76c1-4669-adb3-7b92560ef90d-kube-api-access-ndxkg\") pod \"cinder-api-0\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.607334 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:28:48 crc kubenswrapper[4856]: I1122 07:28:48.753924 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d4e33a-ad0c-41f5-8f65-a20df5a7c229" path="/var/lib/kubelet/pods/e2d4e33a-ad0c-41f5-8f65-a20df5a7c229/volumes" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.117561 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:28:49 crc kubenswrapper[4856]: W1122 07:28:49.124002 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb049e107_76c1_4669_adb3_7b92560ef90d.slice/crio-1160ab49f39ac26a166bd02b3d5ea23beb4ec8bb76ba31c284a961efc8ae7ec7 WatchSource:0}: Error finding container 1160ab49f39ac26a166bd02b3d5ea23beb4ec8bb76ba31c284a961efc8ae7ec7: Status 404 returned error can't find the container with id 1160ab49f39ac26a166bd02b3d5ea23beb4ec8bb76ba31c284a961efc8ae7ec7 Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.182193 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.198877 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b049e107-76c1-4669-adb3-7b92560ef90d","Type":"ContainerStarted","Data":"1160ab49f39ac26a166bd02b3d5ea23beb4ec8bb76ba31c284a961efc8ae7ec7"} Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.202241 4856 generic.go:334] "Generic (PLEG): container finished" podID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerID="395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573" exitCode=137 Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.202292 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerDied","Data":"395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573"} Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.202313 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"753b7046-c6a6-4a8a-bc9c-46b1161c43df","Type":"ContainerDied","Data":"0195e49453eeb03a8da98c792647dcdd235fc34de81a21f71511875a29f4f525"} Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.202328 4856 scope.go:117] "RemoveContainer" containerID="395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.202450 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.247708 4856 scope.go:117] "RemoveContainer" containerID="8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.295265 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-log-httpd\") pod \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.295318 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-scripts\") pod \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.295360 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-combined-ca-bundle\") pod \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.295461 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-run-httpd\") pod \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.296146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "753b7046-c6a6-4a8a-bc9c-46b1161c43df" (UID: "753b7046-c6a6-4a8a-bc9c-46b1161c43df"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.296257 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-sg-core-conf-yaml\") pod \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.296321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9v42\" (UniqueName: \"kubernetes.io/projected/753b7046-c6a6-4a8a-bc9c-46b1161c43df-kube-api-access-w9v42\") pod \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.296397 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "753b7046-c6a6-4a8a-bc9c-46b1161c43df" (UID: "753b7046-c6a6-4a8a-bc9c-46b1161c43df"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.296401 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-config-data\") pod \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\" (UID: \"753b7046-c6a6-4a8a-bc9c-46b1161c43df\") " Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.297225 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.297244 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/753b7046-c6a6-4a8a-bc9c-46b1161c43df-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.297883 4856 scope.go:117] "RemoveContainer" containerID="562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.302683 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753b7046-c6a6-4a8a-bc9c-46b1161c43df-kube-api-access-w9v42" (OuterVolumeSpecName: "kube-api-access-w9v42") pod "753b7046-c6a6-4a8a-bc9c-46b1161c43df" (UID: "753b7046-c6a6-4a8a-bc9c-46b1161c43df"). InnerVolumeSpecName "kube-api-access-w9v42". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.305987 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-scripts" (OuterVolumeSpecName: "scripts") pod "753b7046-c6a6-4a8a-bc9c-46b1161c43df" (UID: "753b7046-c6a6-4a8a-bc9c-46b1161c43df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.324758 4856 scope.go:117] "RemoveContainer" containerID="3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.330298 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "753b7046-c6a6-4a8a-bc9c-46b1161c43df" (UID: "753b7046-c6a6-4a8a-bc9c-46b1161c43df"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.351755 4856 scope.go:117] "RemoveContainer" containerID="395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573" Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.352203 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573\": container with ID starting with 395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573 not found: ID does not exist" containerID="395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.352235 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573"} err="failed to get container status \"395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573\": rpc error: code = NotFound desc = could not find container \"395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573\": container with ID starting with 395665ce63a5fc2a3189872b904be1279bac4fd5e78ec92fffb8722586b9d573 not found: ID does not exist" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.352255 4856 scope.go:117] "RemoveContainer" containerID="8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa" Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.352450 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa\": container with ID starting with 8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa not found: ID does not exist" containerID="8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.352482 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa"} err="failed to get container status \"8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa\": rpc error: code = NotFound desc = could not find container \"8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa\": container with ID starting with 8de4f4988bab83236a11467a1cc71bdd3e3315d1652063ac823d869b96e746fa not found: ID does not exist" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.352494 4856 scope.go:117] "RemoveContainer" containerID="562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775" Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.353115 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775\": container with ID starting with 562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775 not found: ID does not exist" containerID="562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.353150 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775"} err="failed to get container status \"562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775\": rpc error: code = NotFound desc = could not find container \"562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775\": container with ID starting with 562d4bdb219c308b27412d94eef347626c984f11edda6494e6814e6a306a4775 not found: ID does not exist" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.353166 4856 scope.go:117] "RemoveContainer" containerID="3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9" Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.353383 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9\": container with ID starting with 3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9 not found: ID does not exist" containerID="3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.353404 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9"} err="failed to get container status \"3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9\": rpc error: code = NotFound desc = could not find container \"3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9\": container with ID starting with 3276dc849e2788805c7afaa623b6bfbad6b0d732144c17a1059624f2982c13c9 not found: ID does not exist" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.379830 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "753b7046-c6a6-4a8a-bc9c-46b1161c43df" (UID: "753b7046-c6a6-4a8a-bc9c-46b1161c43df"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.399195 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.399229 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.399238 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.399247 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9v42\" (UniqueName: \"kubernetes.io/projected/753b7046-c6a6-4a8a-bc9c-46b1161c43df-kube-api-access-w9v42\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.400231 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-config-data" (OuterVolumeSpecName: "config-data") pod "753b7046-c6a6-4a8a-bc9c-46b1161c43df" (UID: "753b7046-c6a6-4a8a-bc9c-46b1161c43df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.501252 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/753b7046-c6a6-4a8a-bc9c-46b1161c43df-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.559446 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.575375 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.588429 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.588908 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-notification-agent" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.588928 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-notification-agent" Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.588943 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-central-agent" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.588950 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-central-agent" Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.588968 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="sg-core" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.588974 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="sg-core" Nov 22 07:28:49 crc kubenswrapper[4856]: E1122 07:28:49.588994 4856 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="proxy-httpd" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.589001 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="proxy-httpd" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.589170 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-central-agent" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.589188 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="sg-core" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.589212 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="ceilometer-notification-agent" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.589224 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" containerName="proxy-httpd" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.592613 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.595658 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.595663 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.595909 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.705720 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-scripts\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.705787 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-run-httpd\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.705810 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-config-data\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.705848 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.705874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vw8l\" (UniqueName: \"kubernetes.io/projected/953e4594-07f4-459c-9a40-573e9be5a436-kube-api-access-6vw8l\") pod 
\"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.705928 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-log-httpd\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.705967 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.807639 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-config-data\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.807725 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.807753 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vw8l\" (UniqueName: \"kubernetes.io/projected/953e4594-07f4-459c-9a40-573e9be5a436-kube-api-access-6vw8l\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.807828 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-log-httpd\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.807872 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.807896 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-scripts\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.807945 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-run-httpd\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.808354 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-run-httpd\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.809176 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-log-httpd\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.811845 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.812695 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-config-data\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.813019 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.823727 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-scripts\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.825713 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vw8l\" (UniqueName: \"kubernetes.io/projected/953e4594-07f4-459c-9a40-573e9be5a436-kube-api-access-6vw8l\") pod \"ceilometer-0\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " pod="openstack/ceilometer-0" Nov 22 07:28:49 crc kubenswrapper[4856]: I1122 07:28:49.914752 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:28:50 crc kubenswrapper[4856]: I1122 07:28:50.230768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b049e107-76c1-4669-adb3-7b92560ef90d","Type":"ContainerStarted","Data":"0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01"} Nov 22 07:28:50 crc kubenswrapper[4856]: I1122 07:28:50.348391 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:28:50 crc kubenswrapper[4856]: W1122 07:28:50.364551 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953e4594_07f4_459c_9a40_573e9be5a436.slice/crio-a976e456ca99761cd12827cf1b8a658b05d4605b07c9625352b6d066cce8f707 WatchSource:0}: Error finding container a976e456ca99761cd12827cf1b8a658b05d4605b07c9625352b6d066cce8f707: Status 404 returned error can't find the container with id a976e456ca99761cd12827cf1b8a658b05d4605b07c9625352b6d066cce8f707 Nov 22 07:28:50 crc kubenswrapper[4856]: I1122 07:28:50.555356 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 07:28:50 crc kubenswrapper[4856]: I1122 07:28:50.737763 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="753b7046-c6a6-4a8a-bc9c-46b1161c43df" path="/var/lib/kubelet/pods/753b7046-c6a6-4a8a-bc9c-46b1161c43df/volumes" Nov 22 07:28:51 crc kubenswrapper[4856]: I1122 07:28:51.257616 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerStarted","Data":"a976e456ca99761cd12827cf1b8a658b05d4605b07c9625352b6d066cce8f707"} Nov 22 07:28:51 crc kubenswrapper[4856]: I1122 07:28:51.259683 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b049e107-76c1-4669-adb3-7b92560ef90d","Type":"ContainerStarted","Data":"ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42"} Nov 22 07:28:51 crc kubenswrapper[4856]: I1122 07:28:51.259813 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 07:28:51 crc kubenswrapper[4856]: I1122 07:28:51.279450 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.279431455 podStartE2EDuration="3.279431455s" podCreationTimestamp="2025-11-22 07:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:28:51.278779517 +0000 UTC m=+1573.692172785" watchObservedRunningTime="2025-11-22 07:28:51.279431455 +0000 UTC m=+1573.692824713" Nov 22 07:28:52 crc kubenswrapper[4856]: I1122 07:28:52.269220 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerStarted","Data":"6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892"} Nov 22 07:28:53 crc kubenswrapper[4856]: I1122 07:28:53.280317 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerStarted","Data":"d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81"} Nov 22 07:28:53 crc kubenswrapper[4856]: I1122 07:28:53.359354 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:53 crc kubenswrapper[4856]: I1122 07:28:53.359703 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:53 crc kubenswrapper[4856]: I1122 07:28:53.404057 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:54 crc kubenswrapper[4856]: I1122 07:28:54.291904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerStarted","Data":"b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02"} Nov 22 07:28:54 crc kubenswrapper[4856]: I1122 07:28:54.344088 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:54 crc kubenswrapper[4856]: I1122 07:28:54.401799 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bh2l"] Nov 22 07:28:55 crc kubenswrapper[4856]: I1122 07:28:55.893228 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 07:28:56 crc kubenswrapper[4856]: I1122 07:28:56.309383 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4bh2l" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="registry-server" containerID="cri-o://7a11902417d8769a8a60bc050c719206838508793470f149b69627e7db894416" gracePeriod=2 Nov 22 07:28:57 crc kubenswrapper[4856]: I1122 07:28:57.323340 4856 generic.go:334] "Generic (PLEG): container finished" podID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerID="7a11902417d8769a8a60bc050c719206838508793470f149b69627e7db894416" exitCode=0 Nov 22 07:28:57 crc kubenswrapper[4856]: I1122 07:28:57.323382 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bh2l" event={"ID":"e149205c-8786-42a3-9531-9e17bc47d2b7","Type":"ContainerDied","Data":"7a11902417d8769a8a60bc050c719206838508793470f149b69627e7db894416"} Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.646985 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.703661 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-catalog-content\") pod \"e149205c-8786-42a3-9531-9e17bc47d2b7\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.703851 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-utilities\") pod \"e149205c-8786-42a3-9531-9e17bc47d2b7\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.703966 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjf2p\" (UniqueName: \"kubernetes.io/projected/e149205c-8786-42a3-9531-9e17bc47d2b7-kube-api-access-bjf2p\") pod \"e149205c-8786-42a3-9531-9e17bc47d2b7\" (UID: \"e149205c-8786-42a3-9531-9e17bc47d2b7\") " Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.704873 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-utilities" (OuterVolumeSpecName: "utilities") pod "e149205c-8786-42a3-9531-9e17bc47d2b7" (UID: "e149205c-8786-42a3-9531-9e17bc47d2b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.721961 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e149205c-8786-42a3-9531-9e17bc47d2b7-kube-api-access-bjf2p" (OuterVolumeSpecName: "kube-api-access-bjf2p") pod "e149205c-8786-42a3-9531-9e17bc47d2b7" (UID: "e149205c-8786-42a3-9531-9e17bc47d2b7"). InnerVolumeSpecName "kube-api-access-bjf2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.805889 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:28:59 crc kubenswrapper[4856]: I1122 07:28:59.805937 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjf2p\" (UniqueName: \"kubernetes.io/projected/e149205c-8786-42a3-9531-9e17bc47d2b7-kube-api-access-bjf2p\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:00 crc kubenswrapper[4856]: I1122 07:29:00.347243 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4bh2l" event={"ID":"e149205c-8786-42a3-9531-9e17bc47d2b7","Type":"ContainerDied","Data":"f210fdb29c48f380d6de70c3f93648453504b4d7ef6535354d674db1faa89010"} Nov 22 07:29:00 crc kubenswrapper[4856]: I1122 07:29:00.347639 4856 scope.go:117] "RemoveContainer" containerID="7a11902417d8769a8a60bc050c719206838508793470f149b69627e7db894416" Nov 22 07:29:00 crc kubenswrapper[4856]: I1122 07:29:00.347798 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4bh2l" Nov 22 07:29:00 crc kubenswrapper[4856]: I1122 07:29:00.436372 4856 scope.go:117] "RemoveContainer" containerID="af96053151b51c3e715ef4e4bf6e02658b57829b1136e18b83d78a3017f1c33e" Nov 22 07:29:00 crc kubenswrapper[4856]: I1122 07:29:00.453181 4856 scope.go:117] "RemoveContainer" containerID="02b55e56a3cc84c198d38a917f0119daba5800626cd391b3b89d232923266824" Nov 22 07:29:00 crc kubenswrapper[4856]: I1122 07:29:00.486858 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 07:29:02 crc kubenswrapper[4856]: I1122 07:29:02.374874 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerStarted","Data":"7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56"} Nov 22 07:29:04 crc kubenswrapper[4856]: I1122 07:29:04.401213 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:29:04 crc kubenswrapper[4856]: I1122 07:29:04.449155 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.52134594 podStartE2EDuration="15.449136202s" podCreationTimestamp="2025-11-22 07:28:49 +0000 UTC" firstStartedPulling="2025-11-22 07:28:50.373996257 +0000 UTC m=+1572.787389515" lastFinishedPulling="2025-11-22 07:29:00.301786509 +0000 UTC m=+1582.715179777" observedRunningTime="2025-11-22 07:29:04.443157029 +0000 UTC m=+1586.856550297" watchObservedRunningTime="2025-11-22 07:29:04.449136202 +0000 UTC m=+1586.862529460" Nov 22 07:29:04 crc kubenswrapper[4856]: I1122 07:29:04.987643 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e149205c-8786-42a3-9531-9e17bc47d2b7" (UID: "e149205c-8786-42a3-9531-9e17bc47d2b7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:29:05 crc kubenswrapper[4856]: I1122 07:29:05.033789 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e149205c-8786-42a3-9531-9e17bc47d2b7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:05 crc kubenswrapper[4856]: I1122 07:29:05.179850 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bh2l"] Nov 22 07:29:05 crc kubenswrapper[4856]: I1122 07:29:05.187444 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4bh2l"] Nov 22 07:29:06 crc kubenswrapper[4856]: I1122 07:29:06.720825 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" path="/var/lib/kubelet/pods/e149205c-8786-42a3-9531-9e17bc47d2b7/volumes" Nov 22 07:29:19 crc kubenswrapper[4856]: I1122 07:29:19.942196 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.327136 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.328829 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="898257c3-b9a4-4d7b-8484-f3466c19e051" containerName="kube-state-metrics" containerID="cri-o://f7971572f0255cf6911c06156edf962b38287cca61b7d41c5cd4c9d5ecd2a048" gracePeriod=30 Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.583310 4856 generic.go:334] "Generic (PLEG): container finished" podID="898257c3-b9a4-4d7b-8484-f3466c19e051" containerID="f7971572f0255cf6911c06156edf962b38287cca61b7d41c5cd4c9d5ecd2a048" exitCode=2 Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.583386 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"898257c3-b9a4-4d7b-8484-f3466c19e051","Type":"ContainerDied","Data":"f7971572f0255cf6911c06156edf962b38287cca61b7d41c5cd4c9d5ecd2a048"} Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.809151 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.890664 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chm6g\" (UniqueName: \"kubernetes.io/projected/898257c3-b9a4-4d7b-8484-f3466c19e051-kube-api-access-chm6g\") pod \"898257c3-b9a4-4d7b-8484-f3466c19e051\" (UID: \"898257c3-b9a4-4d7b-8484-f3466c19e051\") " Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.897378 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898257c3-b9a4-4d7b-8484-f3466c19e051-kube-api-access-chm6g" (OuterVolumeSpecName: "kube-api-access-chm6g") pod "898257c3-b9a4-4d7b-8484-f3466c19e051" (UID: "898257c3-b9a4-4d7b-8484-f3466c19e051"). InnerVolumeSpecName "kube-api-access-chm6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:29:24 crc kubenswrapper[4856]: I1122 07:29:24.992585 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chm6g\" (UniqueName: \"kubernetes.io/projected/898257c3-b9a4-4d7b-8484-f3466c19e051-kube-api-access-chm6g\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.602943 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"898257c3-b9a4-4d7b-8484-f3466c19e051","Type":"ContainerDied","Data":"2e2842c993f54993b4fc7cb6e515a6eddf52c462e2af859c384c462dbece99fe"} Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.603296 4856 scope.go:117] "RemoveContainer" containerID="f7971572f0255cf6911c06156edf962b38287cca61b7d41c5cd4c9d5ecd2a048" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.603182 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.667494 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.674657 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.688924 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:29:25 crc kubenswrapper[4856]: E1122 07:29:25.689630 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898257c3-b9a4-4d7b-8484-f3466c19e051" containerName="kube-state-metrics" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.689656 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="898257c3-b9a4-4d7b-8484-f3466c19e051" containerName="kube-state-metrics" Nov 22 07:29:25 crc kubenswrapper[4856]: E1122 07:29:25.689685 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="registry-server" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.689692 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="registry-server" Nov 22 07:29:25 crc kubenswrapper[4856]: E1122 07:29:25.689704 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="extract-content" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.689710 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="extract-content" Nov 22 07:29:25 crc kubenswrapper[4856]: E1122 07:29:25.689733 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="extract-utilities" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.689744 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="extract-utilities" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.689950 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="898257c3-b9a4-4d7b-8484-f3466c19e051" containerName="kube-state-metrics" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.690504 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e149205c-8786-42a3-9531-9e17bc47d2b7" containerName="registry-server" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.691969 4856 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.694133 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.694265 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.700979 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.814105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.814317 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.814420 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h85ws\" (UniqueName: \"kubernetes.io/projected/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-api-access-h85ws\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.814463 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.916518 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.916581 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h85ws\" (UniqueName: \"kubernetes.io/projected/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-api-access-h85ws\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.916602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.916717 4856 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.921601 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.921738 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.922407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:25 crc kubenswrapper[4856]: I1122 07:29:25.949200 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h85ws\" (UniqueName: \"kubernetes.io/projected/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-api-access-h85ws\") pod \"kube-state-metrics-0\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " pod="openstack/kube-state-metrics-0" Nov 22 07:29:26 crc kubenswrapper[4856]: I1122 07:29:26.010858 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:29:26 crc kubenswrapper[4856]: I1122 07:29:26.466297 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:29:26 crc kubenswrapper[4856]: I1122 07:29:26.612067 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a38fdd7-2dc0-4ebc-91c7-359d0e437900","Type":"ContainerStarted","Data":"01875bf103004a78036f815cb505ee109cef1d2451273e63e585335a0418eaf0"} Nov 22 07:29:26 crc kubenswrapper[4856]: I1122 07:29:26.722993 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898257c3-b9a4-4d7b-8484-f3466c19e051" path="/var/lib/kubelet/pods/898257c3-b9a4-4d7b-8484-f3466c19e051/volumes" Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.177044 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.177692 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-central-agent" containerID="cri-o://6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892" gracePeriod=30 Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.177786 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="sg-core" containerID="cri-o://b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02" gracePeriod=30 Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.177877 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-notification-agent" containerID="cri-o://d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81" gracePeriod=30 Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.177976 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="proxy-httpd" containerID="cri-o://7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56" gracePeriod=30 Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.626364 4856 generic.go:334] "Generic (PLEG): container finished" podID="953e4594-07f4-459c-9a40-573e9be5a436" containerID="7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56" exitCode=0 Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.626404 4856 generic.go:334] "Generic (PLEG): container finished" podID="953e4594-07f4-459c-9a40-573e9be5a436" containerID="b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02" exitCode=2 Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.626430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerDied","Data":"7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56"} Nov 22 07:29:27 crc kubenswrapper[4856]: I1122 07:29:27.626532 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerDied","Data":"b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02"} Nov 22 07:29:28 crc kubenswrapper[4856]: I1122 07:29:28.636587 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="953e4594-07f4-459c-9a40-573e9be5a436" containerID="6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892" exitCode=0 Nov 22 07:29:28 crc kubenswrapper[4856]: I1122 07:29:28.636626 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerDied","Data":"6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892"} Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.595958 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.651293 4856 generic.go:334] "Generic (PLEG): container finished" podID="953e4594-07f4-459c-9a40-573e9be5a436" containerID="d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81" exitCode=0 Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.651356 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerDied","Data":"d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81"} Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.651383 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953e4594-07f4-459c-9a40-573e9be5a436","Type":"ContainerDied","Data":"a976e456ca99761cd12827cf1b8a658b05d4605b07c9625352b6d066cce8f707"} Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.651401 4856 scope.go:117] "RemoveContainer" containerID="7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.651630 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.655613 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a38fdd7-2dc0-4ebc-91c7-359d0e437900","Type":"ContainerStarted","Data":"79ac3da01d567af671e8140ba0abef013a08691b348676216927e29a7c793bcc"} Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.655805 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.681693 4856 scope.go:117] "RemoveContainer" containerID="b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.683590 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.993396347 podStartE2EDuration="4.683571739s" podCreationTimestamp="2025-11-22 07:29:25 +0000 UTC" firstStartedPulling="2025-11-22 07:29:26.477975282 +0000 UTC m=+1608.891368560" lastFinishedPulling="2025-11-22 07:29:29.168150694 +0000 UTC m=+1611.581543952" observedRunningTime="2025-11-22 07:29:29.680502615 +0000 UTC m=+1612.093895963" watchObservedRunningTime="2025-11-22 07:29:29.683571739 +0000 UTC m=+1612.096964997" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.704212 4856 scope.go:117] "RemoveContainer" containerID="d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.722103 4856 scope.go:117] "RemoveContainer" containerID="6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.740853 4856 scope.go:117] "RemoveContainer" 
containerID="7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56" Nov 22 07:29:29 crc kubenswrapper[4856]: E1122 07:29:29.741327 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56\": container with ID starting with 7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56 not found: ID does not exist" containerID="7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.741389 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56"} err="failed to get container status \"7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56\": rpc error: code = NotFound desc = could not find container \"7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56\": container with ID starting with 7978d6655a47eeea155f948222f4001d03205f5c58bcdea01cdd544c9da85f56 not found: ID does not exist" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.741416 4856 scope.go:117] "RemoveContainer" containerID="b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02" Nov 22 07:29:29 crc kubenswrapper[4856]: E1122 07:29:29.741856 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02\": container with ID starting with b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02 not found: ID does not exist" containerID="b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.741892 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02"} err="failed to get container status \"b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02\": rpc error: code = NotFound desc = could not find container \"b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02\": container with ID starting with b6cf2e11c3175c0bd16e04bb89e61c6659a46f4d3b577af88c7616139b554f02 not found: ID does not exist" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.741913 4856 scope.go:117] "RemoveContainer" containerID="d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81" Nov 22 07:29:29 crc kubenswrapper[4856]: E1122 07:29:29.742238 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81\": container with ID starting with d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81 not found: ID does not exist" containerID="d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.742301 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81"} err="failed to get container status \"d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81\": rpc error: code = NotFound desc = could not find container \"d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81\": container with ID starting with 
d30494bd444a87e437a062275a23e4a70e115eb9c29e3a7e53537fd1d5234e81 not found: ID does not exist" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.742332 4856 scope.go:117] "RemoveContainer" containerID="6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892" Nov 22 07:29:29 crc kubenswrapper[4856]: E1122 07:29:29.742642 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892\": container with ID starting with 6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892 not found: ID does not exist" containerID="6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.742667 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892"} err="failed to get container status \"6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892\": rpc error: code = NotFound desc = could not find container \"6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892\": container with ID starting with 6d647e1f810e4e07e3626e4b470591e5d95575779f1106c53d75156b669be892 not found: ID does not exist" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.754614 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.754690 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.795880 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-scripts\") pod \"953e4594-07f4-459c-9a40-573e9be5a436\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.795947 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-log-httpd\") pod \"953e4594-07f4-459c-9a40-573e9be5a436\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.795980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vw8l\" (UniqueName: \"kubernetes.io/projected/953e4594-07f4-459c-9a40-573e9be5a436-kube-api-access-6vw8l\") pod \"953e4594-07f4-459c-9a40-573e9be5a436\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.796017 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-combined-ca-bundle\") pod \"953e4594-07f4-459c-9a40-573e9be5a436\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.796182 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-config-data\") pod \"953e4594-07f4-459c-9a40-573e9be5a436\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.796218 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-sg-core-conf-yaml\") pod \"953e4594-07f4-459c-9a40-573e9be5a436\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.796243 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-run-httpd\") pod \"953e4594-07f4-459c-9a40-573e9be5a436\" (UID: \"953e4594-07f4-459c-9a40-573e9be5a436\") " Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.797016 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "953e4594-07f4-459c-9a40-573e9be5a436" (UID: "953e4594-07f4-459c-9a40-573e9be5a436"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.797222 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "953e4594-07f4-459c-9a40-573e9be5a436" (UID: "953e4594-07f4-459c-9a40-573e9be5a436"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.803351 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-scripts" (OuterVolumeSpecName: "scripts") pod "953e4594-07f4-459c-9a40-573e9be5a436" (UID: "953e4594-07f4-459c-9a40-573e9be5a436"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.804068 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953e4594-07f4-459c-9a40-573e9be5a436-kube-api-access-6vw8l" (OuterVolumeSpecName: "kube-api-access-6vw8l") pod "953e4594-07f4-459c-9a40-573e9be5a436" (UID: "953e4594-07f4-459c-9a40-573e9be5a436"). InnerVolumeSpecName "kube-api-access-6vw8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.830678 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "953e4594-07f4-459c-9a40-573e9be5a436" (UID: "953e4594-07f4-459c-9a40-573e9be5a436"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.868282 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "953e4594-07f4-459c-9a40-573e9be5a436" (UID: "953e4594-07f4-459c-9a40-573e9be5a436"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.898398 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.898443 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.898455 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.898468 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953e4594-07f4-459c-9a40-573e9be5a436-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.898479 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vw8l\" (UniqueName: \"kubernetes.io/projected/953e4594-07f4-459c-9a40-573e9be5a436-kube-api-access-6vw8l\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.898492 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.908912 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-config-data" (OuterVolumeSpecName: "config-data") pod "953e4594-07f4-459c-9a40-573e9be5a436" (UID: "953e4594-07f4-459c-9a40-573e9be5a436"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:29 crc kubenswrapper[4856]: I1122 07:29:29.991776 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.002208 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953e4594-07f4-459c-9a40-573e9be5a436-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.004111 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.017814 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:29:30 crc kubenswrapper[4856]: E1122 07:29:30.018277 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-notification-agent" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018297 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-notification-agent" Nov 22 07:29:30 crc kubenswrapper[4856]: E1122 07:29:30.018312 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="sg-core" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018318 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="sg-core" Nov 22 07:29:30 crc kubenswrapper[4856]: E1122 07:29:30.018328 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="proxy-httpd" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018335 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="proxy-httpd" Nov 22 07:29:30 crc kubenswrapper[4856]: E1122 07:29:30.018347 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-central-agent" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018355 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-central-agent" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018591 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-notification-agent" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018608 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="proxy-httpd" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018628 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="ceilometer-central-agent" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.018643 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="953e4594-07f4-459c-9a40-573e9be5a436" containerName="sg-core" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.021175 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.023231 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.023708 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.023887 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.026811 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205566 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-scripts\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205641 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpm2v\" (UniqueName: \"kubernetes.io/projected/49046268-02be-4651-96da-4a3a4c3039f3-kube-api-access-fpm2v\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205666 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-run-httpd\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205848 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205897 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-config-data\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.205928 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-log-httpd\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.307701 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-scripts\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.307767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpm2v\" (UniqueName: \"kubernetes.io/projected/49046268-02be-4651-96da-4a3a4c3039f3-kube-api-access-fpm2v\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.307799 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-run-httpd\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.307883 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.307955 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.308017 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.308144 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-config-data\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.308461 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-log-httpd\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.308455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-run-httpd\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.308728 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-log-httpd\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.311517 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-scripts\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.311524 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.311849 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.312622 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.313805 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-config-data\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.325960 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpm2v\" (UniqueName: \"kubernetes.io/projected/49046268-02be-4651-96da-4a3a4c3039f3-kube-api-access-fpm2v\") pod \"ceilometer-0\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.338128 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.722659 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953e4594-07f4-459c-9a40-573e9be5a436" path="/var/lib/kubelet/pods/953e4594-07f4-459c-9a40-573e9be5a436/volumes" Nov 22 07:29:30 crc kubenswrapper[4856]: I1122 07:29:30.765372 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:29:31 crc kubenswrapper[4856]: I1122 07:29:31.677003 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerStarted","Data":"1106241933ddc2caac6ea21e2b27781d347aebbf7d4eb43d6c57024a34d50707"} Nov 22 07:29:33 crc kubenswrapper[4856]: I1122 07:29:33.701995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerStarted","Data":"946af67d87f924ac2af7f4cdf82505b05729866f6a051bc37f47293938239f38"} Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.672675 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ghpb2"] Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.677627 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.718714 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerStarted","Data":"2d30b3db3f28ec8801af5d34284f06f3448849f149a49a685f92fb2fcbe9a623"} Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.766901 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghpb2"] Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.810911 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-utilities\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.811574 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-catalog-content\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.811794 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvxwx\" (UniqueName: \"kubernetes.io/projected/e78912cf-da4f-41ec-a4c8-05da36da1594-kube-api-access-cvxwx\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.915023 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvxwx\" (UniqueName: \"kubernetes.io/projected/e78912cf-da4f-41ec-a4c8-05da36da1594-kube-api-access-cvxwx\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 
07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.915154 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-utilities\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.915306 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-catalog-content\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.916039 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-utilities\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.916077 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-catalog-content\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:35 crc kubenswrapper[4856]: I1122 07:29:35.945681 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvxwx\" (UniqueName: \"kubernetes.io/projected/e78912cf-da4f-41ec-a4c8-05da36da1594-kube-api-access-cvxwx\") pod \"certified-operators-ghpb2\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:36 crc kubenswrapper[4856]: I1122 07:29:36.040885 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:36 crc kubenswrapper[4856]: I1122 07:29:36.044044 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 07:29:36 crc kubenswrapper[4856]: I1122 07:29:36.602732 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghpb2"] Nov 22 07:29:36 crc kubenswrapper[4856]: W1122 07:29:36.605130 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode78912cf_da4f_41ec_a4c8_05da36da1594.slice/crio-0b9f7e6e83f6711eadb714303a1d07c15f879487ae8d7c1a5233d51b8089ac74 WatchSource:0}: Error finding container 0b9f7e6e83f6711eadb714303a1d07c15f879487ae8d7c1a5233d51b8089ac74: Status 404 returned error can't find the container with id 0b9f7e6e83f6711eadb714303a1d07c15f879487ae8d7c1a5233d51b8089ac74 Nov 22 07:29:36 crc kubenswrapper[4856]: I1122 07:29:36.730046 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghpb2" event={"ID":"e78912cf-da4f-41ec-a4c8-05da36da1594","Type":"ContainerStarted","Data":"0b9f7e6e83f6711eadb714303a1d07c15f879487ae8d7c1a5233d51b8089ac74"} Nov 22 07:29:37 crc kubenswrapper[4856]: I1122 07:29:37.738755 4856 generic.go:334] "Generic (PLEG): container finished" podID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerID="6d693a2b0ad0103313405b56a2ee7d17cf0e41f5bdf0fde811546cf88f6b2133" exitCode=0 Nov 22 07:29:37 crc kubenswrapper[4856]: I1122 07:29:37.739072 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghpb2" event={"ID":"e78912cf-da4f-41ec-a4c8-05da36da1594","Type":"ContainerDied","Data":"6d693a2b0ad0103313405b56a2ee7d17cf0e41f5bdf0fde811546cf88f6b2133"} Nov 22 07:29:38 crc kubenswrapper[4856]: I1122 07:29:38.754829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerStarted","Data":"d796e0d46078928f181f510d4c3578e84cb323d3da2b05f3773d246a65f267ae"} Nov 22 07:29:39 crc kubenswrapper[4856]: I1122 07:29:39.770967 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghpb2" event={"ID":"e78912cf-da4f-41ec-a4c8-05da36da1594","Type":"ContainerStarted","Data":"b5fca97a88728e7da878e34edd7765bc39bcba7d54f6504d93ed85bbf0158222"} Nov 22 07:29:39 crc kubenswrapper[4856]: I1122 07:29:39.777490 4856 generic.go:334] "Generic (PLEG): container finished" podID="b446b176-7d24-4bb1-ab69-7d78c1c1e99f" containerID="1e67d8cd584ceeb200c9518aba1f39886ff3c391d12da5f8ac55f49863259170" exitCode=0 Nov 22 07:29:39 crc kubenswrapper[4856]: I1122 07:29:39.777561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w7rvq" event={"ID":"b446b176-7d24-4bb1-ab69-7d78c1c1e99f","Type":"ContainerDied","Data":"1e67d8cd584ceeb200c9518aba1f39886ff3c391d12da5f8ac55f49863259170"} Nov 22 07:29:40 crc kubenswrapper[4856]: I1122 07:29:40.791986 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerStarted","Data":"95d6c2c47b91df049aeb31824e749494e674c77f87ab15e788b4e9196bec6ae6"} Nov 22 07:29:40 crc kubenswrapper[4856]: I1122 07:29:40.792552 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:29:40 crc kubenswrapper[4856]: 
I1122 07:29:40.795022 4856 generic.go:334] "Generic (PLEG): container finished" podID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerID="b5fca97a88728e7da878e34edd7765bc39bcba7d54f6504d93ed85bbf0158222" exitCode=0 Nov 22 07:29:40 crc kubenswrapper[4856]: I1122 07:29:40.795081 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghpb2" event={"ID":"e78912cf-da4f-41ec-a4c8-05da36da1594","Type":"ContainerDied","Data":"b5fca97a88728e7da878e34edd7765bc39bcba7d54f6504d93ed85bbf0158222"} Nov 22 07:29:40 crc kubenswrapper[4856]: I1122 07:29:40.822862 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.315779895 podStartE2EDuration="11.822845235s" podCreationTimestamp="2025-11-22 07:29:29 +0000 UTC" firstStartedPulling="2025-11-22 07:29:30.772064197 +0000 UTC m=+1613.185457455" lastFinishedPulling="2025-11-22 07:29:40.279129537 +0000 UTC m=+1622.692522795" observedRunningTime="2025-11-22 07:29:40.819797981 +0000 UTC m=+1623.233191239" watchObservedRunningTime="2025-11-22 07:29:40.822845235 +0000 UTC m=+1623.236238493" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.155981 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.315146 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-combined-ca-bundle\") pod \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.315248 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbxwc\" (UniqueName: \"kubernetes.io/projected/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-kube-api-access-fbxwc\") pod \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.315693 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-config\") pod \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\" (UID: \"b446b176-7d24-4bb1-ab69-7d78c1c1e99f\") " Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.321132 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-kube-api-access-fbxwc" (OuterVolumeSpecName: "kube-api-access-fbxwc") pod "b446b176-7d24-4bb1-ab69-7d78c1c1e99f" (UID: "b446b176-7d24-4bb1-ab69-7d78c1c1e99f"). InnerVolumeSpecName "kube-api-access-fbxwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.347843 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b446b176-7d24-4bb1-ab69-7d78c1c1e99f" (UID: "b446b176-7d24-4bb1-ab69-7d78c1c1e99f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.357867 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-config" (OuterVolumeSpecName: "config") pod "b446b176-7d24-4bb1-ab69-7d78c1c1e99f" (UID: "b446b176-7d24-4bb1-ab69-7d78c1c1e99f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.419059 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.419105 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbxwc\" (UniqueName: \"kubernetes.io/projected/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-kube-api-access-fbxwc\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.419137 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b446b176-7d24-4bb1-ab69-7d78c1c1e99f-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.806387 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-w7rvq" event={"ID":"b446b176-7d24-4bb1-ab69-7d78c1c1e99f","Type":"ContainerDied","Data":"4a145fd635b3af96adc1a373b0ff0dcd6c362589ac650981c611e09a5cef50b4"} Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.806440 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a145fd635b3af96adc1a373b0ff0dcd6c362589ac650981c611e09a5cef50b4" Nov 22 07:29:41 crc kubenswrapper[4856]: I1122 07:29:41.806489 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-w7rvq" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.068911 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77cfcbb9df-w8w5j"] Nov 22 07:29:42 crc kubenswrapper[4856]: E1122 07:29:42.069278 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b446b176-7d24-4bb1-ab69-7d78c1c1e99f" containerName="neutron-db-sync" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.069295 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b446b176-7d24-4bb1-ab69-7d78c1c1e99f" containerName="neutron-db-sync" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.069470 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b446b176-7d24-4bb1-ab69-7d78c1c1e99f" containerName="neutron-db-sync" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.070356 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.102901 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cfcbb9df-w8w5j"] Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.231738 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d7cc75d86-4k58n"] Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.235153 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.236398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-config\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.236437 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-svc\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.236570 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-nb\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.236600 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-swift-storage-0\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.236619 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-sb\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.236646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv45w\" (UniqueName: \"kubernetes.io/projected/3ec90c1b-704a-41bd-869c-62041bfe19ea-kube-api-access-jv45w\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.241129 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.241337 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.241383 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d7cc75d86-4k58n"] Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.241856 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-d46d7" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.242004 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338204 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-config\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338256 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-combined-ca-bundle\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338280 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-svc\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338296 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-config\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338589 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqn46\" (UniqueName: \"kubernetes.io/projected/975789c2-1cb7-43db-a687-9a6cbd45eaa0-kube-api-access-hqn46\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338769 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-nb\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338846 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-ovndb-tls-certs\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338875 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-swift-storage-0\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.338917 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-sb\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.339189 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv45w\" (UniqueName: 
\"kubernetes.io/projected/3ec90c1b-704a-41bd-869c-62041bfe19ea-kube-api-access-jv45w\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.339259 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-httpd-config\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.339680 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-svc\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.339745 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-nb\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.339905 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-sb\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.339928 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-swift-storage-0\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.340083 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-config\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.378020 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv45w\" (UniqueName: \"kubernetes.io/projected/3ec90c1b-704a-41bd-869c-62041bfe19ea-kube-api-access-jv45w\") pod \"dnsmasq-dns-77cfcbb9df-w8w5j\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.397944 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.440423 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqn46\" (UniqueName: \"kubernetes.io/projected/975789c2-1cb7-43db-a687-9a6cbd45eaa0-kube-api-access-hqn46\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.440546 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-ovndb-tls-certs\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.440589 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-httpd-config\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.440660 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-config\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.440696 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-combined-ca-bundle\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.453120 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-combined-ca-bundle\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.453164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-httpd-config\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.453314 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-config\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.453441 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-ovndb-tls-certs\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.466413 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hqn46\" (UniqueName: \"kubernetes.io/projected/975789c2-1cb7-43db-a687-9a6cbd45eaa0-kube-api-access-hqn46\") pod \"neutron-7d7cc75d86-4k58n\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.550997 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.825240 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghpb2" event={"ID":"e78912cf-da4f-41ec-a4c8-05da36da1594","Type":"ContainerStarted","Data":"da0690b8c426086ce41ec1c2846699d5fb81ef3913e96ef07d47aed62b6bb06a"} Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.851274 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ghpb2" podStartSLOduration=3.425316152 podStartE2EDuration="7.851253291s" podCreationTimestamp="2025-11-22 07:29:35 +0000 UTC" firstStartedPulling="2025-11-22 07:29:37.942598507 +0000 UTC m=+1620.355991765" lastFinishedPulling="2025-11-22 07:29:42.368535646 +0000 UTC m=+1624.781928904" observedRunningTime="2025-11-22 07:29:42.842014641 +0000 UTC m=+1625.255407899" watchObservedRunningTime="2025-11-22 07:29:42.851253291 +0000 UTC m=+1625.264646549" Nov 22 07:29:42 crc kubenswrapper[4856]: I1122 07:29:42.963264 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77cfcbb9df-w8w5j"] Nov 22 07:29:42 crc kubenswrapper[4856]: W1122 07:29:42.989763 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ec90c1b_704a_41bd_869c_62041bfe19ea.slice/crio-35d8c8fe40907481a4b0d9d6d9f06f9786540856041bf089d77d751edef4cf9f WatchSource:0}: Error finding container 35d8c8fe40907481a4b0d9d6d9f06f9786540856041bf089d77d751edef4cf9f: Status 404 returned error can't find the container with id 35d8c8fe40907481a4b0d9d6d9f06f9786540856041bf089d77d751edef4cf9f Nov 22 07:29:43 crc kubenswrapper[4856]: I1122 07:29:43.396309 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d7cc75d86-4k58n"] Nov 22 07:29:43 crc kubenswrapper[4856]: W1122 07:29:43.405843 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod975789c2_1cb7_43db_a687_9a6cbd45eaa0.slice/crio-f1ddf02d6c2fdfda9a3ae07ce2cac8f4f73cf4f3209eafd1cfa3035e0ef355c3 WatchSource:0}: Error finding container f1ddf02d6c2fdfda9a3ae07ce2cac8f4f73cf4f3209eafd1cfa3035e0ef355c3: Status 404 returned error can't find the container with id f1ddf02d6c2fdfda9a3ae07ce2cac8f4f73cf4f3209eafd1cfa3035e0ef355c3 Nov 22 07:29:43 crc kubenswrapper[4856]: I1122 07:29:43.835743 4856 generic.go:334] "Generic (PLEG): container finished" podID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerID="b03526c854b5155fe69d9afd92691715de6d2fe2ea93c5e1f77c78b7b4be7ccd" exitCode=0 Nov 22 07:29:43 crc kubenswrapper[4856]: I1122 07:29:43.835790 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" event={"ID":"3ec90c1b-704a-41bd-869c-62041bfe19ea","Type":"ContainerDied","Data":"b03526c854b5155fe69d9afd92691715de6d2fe2ea93c5e1f77c78b7b4be7ccd"} Nov 22 07:29:43 crc kubenswrapper[4856]: I1122 07:29:43.836088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" 
event={"ID":"3ec90c1b-704a-41bd-869c-62041bfe19ea","Type":"ContainerStarted","Data":"35d8c8fe40907481a4b0d9d6d9f06f9786540856041bf089d77d751edef4cf9f"} Nov 22 07:29:43 crc kubenswrapper[4856]: I1122 07:29:43.838065 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7cc75d86-4k58n" event={"ID":"975789c2-1cb7-43db-a687-9a6cbd45eaa0","Type":"ContainerStarted","Data":"cfaff3d6018e2606aa3f9d7f28a762f77dbb0bcefdc7c1d329e0fa7a832d13c0"} Nov 22 07:29:43 crc kubenswrapper[4856]: I1122 07:29:43.838112 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7cc75d86-4k58n" event={"ID":"975789c2-1cb7-43db-a687-9a6cbd45eaa0","Type":"ContainerStarted","Data":"f1ddf02d6c2fdfda9a3ae07ce2cac8f4f73cf4f3209eafd1cfa3035e0ef355c3"} Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.638574 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c448d48d9-lmlhj"] Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.641092 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.643246 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.643584 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.650112 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c448d48d9-lmlhj"] Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.788379 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-internal-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.788443 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-public-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.788793 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-ovndb-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.788886 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-combined-ca-bundle\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.788994 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmvb\" (UniqueName: \"kubernetes.io/projected/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-kube-api-access-wxmvb\") pod \"neutron-5c448d48d9-lmlhj\" (UID: 
\"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.789032 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-config\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.789062 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-httpd-config\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.866094 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" event={"ID":"3ec90c1b-704a-41bd-869c-62041bfe19ea","Type":"ContainerStarted","Data":"7d6c5796d486492405a3211036cbc8546024e4773abcfd3441e9ecfdc599fc62"} Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.866254 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.868963 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7cc75d86-4k58n" event={"ID":"975789c2-1cb7-43db-a687-9a6cbd45eaa0","Type":"ContainerStarted","Data":"bb8e00173ef09996a83288402af80da1fd0227b773da9ed0230b02d55113055a"} Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.869138 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.890832 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-ovndb-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.890896 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-combined-ca-bundle\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.890932 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxmvb\" (UniqueName: \"kubernetes.io/projected/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-kube-api-access-wxmvb\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.890953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-config\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.890972 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-httpd-config\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.891006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-internal-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.891024 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-public-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.896658 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" podStartSLOduration=2.896640752 podStartE2EDuration="2.896640752s" podCreationTimestamp="2025-11-22 07:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:29:44.884709348 +0000 UTC m=+1627.298102596" watchObservedRunningTime="2025-11-22 07:29:44.896640752 +0000 UTC m=+1627.310034010" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.897462 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-combined-ca-bundle\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.897517 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-config\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.897471 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-ovndb-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.898177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-httpd-config\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.901325 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-internal-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.904912 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-7d7cc75d86-4k58n" podStartSLOduration=2.904894837 podStartE2EDuration="2.904894837s" podCreationTimestamp="2025-11-22 07:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:29:44.90243061 +0000 UTC m=+1627.315823878" watchObservedRunningTime="2025-11-22 07:29:44.904894837 +0000 UTC m=+1627.318288095" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.908317 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-public-tls-certs\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.916280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxmvb\" (UniqueName: \"kubernetes.io/projected/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-kube-api-access-wxmvb\") pod \"neutron-5c448d48d9-lmlhj\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:44 crc kubenswrapper[4856]: I1122 07:29:44.976179 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:45 crc kubenswrapper[4856]: I1122 07:29:45.495225 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c448d48d9-lmlhj"] Nov 22 07:29:45 crc kubenswrapper[4856]: I1122 07:29:45.878826 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c448d48d9-lmlhj" event={"ID":"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313","Type":"ContainerStarted","Data":"5aa5082b981bf0694d914bdadbcba55364731f0db70a966eab09f765f33a7755"} Nov 22 07:29:46 crc kubenswrapper[4856]: I1122 07:29:46.041384 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:46 crc kubenswrapper[4856]: I1122 07:29:46.041456 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:46 crc kubenswrapper[4856]: I1122 07:29:46.089306 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:47 crc kubenswrapper[4856]: I1122 07:29:47.897070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c448d48d9-lmlhj" event={"ID":"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313","Type":"ContainerStarted","Data":"273797dc3d1ff426732192e04e6bd642a97dc99523e657e806f91b951e7b928a"} Nov 22 07:29:51 crc kubenswrapper[4856]: I1122 07:29:51.960281 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c448d48d9-lmlhj" event={"ID":"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313","Type":"ContainerStarted","Data":"e4ce8b9ed4b91b14fe577f0657b03ac8159da3736fa9337862e230ef16a43afb"} Nov 22 07:29:52 crc kubenswrapper[4856]: I1122 07:29:52.399726 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:29:52 crc kubenswrapper[4856]: I1122 07:29:52.458858 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4b4d75d9-r4bms"] Nov 22 07:29:52 crc kubenswrapper[4856]: I1122 07:29:52.459227 4856 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerName="dnsmasq-dns" containerID="cri-o://bf30ee61fbad22f1709515fa30aae4a80f402ac967ce4f3e934886ffb2310cbe" gracePeriod=10 Nov 22 07:29:52 crc kubenswrapper[4856]: I1122 07:29:52.969562 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:29:52 crc kubenswrapper[4856]: I1122 07:29:52.987682 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c448d48d9-lmlhj" podStartSLOduration=8.987661278000001 podStartE2EDuration="8.987661278s" podCreationTimestamp="2025-11-22 07:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:29:52.985844698 +0000 UTC m=+1635.399237956" watchObservedRunningTime="2025-11-22 07:29:52.987661278 +0000 UTC m=+1635.401054536" Nov 22 07:29:53 crc kubenswrapper[4856]: I1122 07:29:53.534184 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Nov 22 07:29:53 crc kubenswrapper[4856]: I1122 07:29:53.978166 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerID="bf30ee61fbad22f1709515fa30aae4a80f402ac967ce4f3e934886ffb2310cbe" exitCode=0 Nov 22 07:29:53 crc kubenswrapper[4856]: I1122 07:29:53.978439 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" event={"ID":"7a2cd411-a78b-4a0e-b667-94994b50d4da","Type":"ContainerDied","Data":"bf30ee61fbad22f1709515fa30aae4a80f402ac967ce4f3e934886ffb2310cbe"} Nov 22 07:29:54 crc kubenswrapper[4856]: I1122 07:29:54.931577 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.031761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" event={"ID":"7a2cd411-a78b-4a0e-b667-94994b50d4da","Type":"ContainerDied","Data":"c876c5078226875de0eb967f2e6faf89a3fb9849c4ec3b39fdd4171c353f98de"} Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.031826 4856 scope.go:117] "RemoveContainer" containerID="bf30ee61fbad22f1709515fa30aae4a80f402ac967ce4f3e934886ffb2310cbe" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.031988 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4b4d75d9-r4bms" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.078509 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-svc\") pod \"7a2cd411-a78b-4a0e-b667-94994b50d4da\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.078601 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-swift-storage-0\") pod \"7a2cd411-a78b-4a0e-b667-94994b50d4da\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.078643 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5mkt\" (UniqueName: \"kubernetes.io/projected/7a2cd411-a78b-4a0e-b667-94994b50d4da-kube-api-access-w5mkt\") pod \"7a2cd411-a78b-4a0e-b667-94994b50d4da\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.078665 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-sb\") pod \"7a2cd411-a78b-4a0e-b667-94994b50d4da\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.078740 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-nb\") pod \"7a2cd411-a78b-4a0e-b667-94994b50d4da\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.078833 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-config\") pod \"7a2cd411-a78b-4a0e-b667-94994b50d4da\" (UID: \"7a2cd411-a78b-4a0e-b667-94994b50d4da\") " Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.085630 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a2cd411-a78b-4a0e-b667-94994b50d4da-kube-api-access-w5mkt" (OuterVolumeSpecName: "kube-api-access-w5mkt") pod "7a2cd411-a78b-4a0e-b667-94994b50d4da" (UID: "7a2cd411-a78b-4a0e-b667-94994b50d4da"). InnerVolumeSpecName "kube-api-access-w5mkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.158773 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-config" (OuterVolumeSpecName: "config") pod "7a2cd411-a78b-4a0e-b667-94994b50d4da" (UID: "7a2cd411-a78b-4a0e-b667-94994b50d4da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.159183 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7a2cd411-a78b-4a0e-b667-94994b50d4da" (UID: "7a2cd411-a78b-4a0e-b667-94994b50d4da"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.161965 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7a2cd411-a78b-4a0e-b667-94994b50d4da" (UID: "7a2cd411-a78b-4a0e-b667-94994b50d4da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.164133 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a2cd411-a78b-4a0e-b667-94994b50d4da" (UID: "7a2cd411-a78b-4a0e-b667-94994b50d4da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.169068 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7a2cd411-a78b-4a0e-b667-94994b50d4da" (UID: "7a2cd411-a78b-4a0e-b667-94994b50d4da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.181566 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5mkt\" (UniqueName: \"kubernetes.io/projected/7a2cd411-a78b-4a0e-b667-94994b50d4da-kube-api-access-w5mkt\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.181602 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.181611 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.181622 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.181630 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.181638 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a2cd411-a78b-4a0e-b667-94994b50d4da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.270662 4856 scope.go:117] "RemoveContainer" containerID="2afc6abc382c4d636dbf6c18ee99d51a7bb85449371c2e3fe310052453b490d0" Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.364376 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4b4d75d9-r4bms"] Nov 22 07:29:55 crc kubenswrapper[4856]: I1122 07:29:55.377383 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d4b4d75d9-r4bms"] Nov 22 07:29:56 crc kubenswrapper[4856]: I1122 07:29:56.089274 4856 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:29:56 crc kubenswrapper[4856]: I1122 07:29:56.141188 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghpb2"] Nov 22 07:29:56 crc kubenswrapper[4856]: I1122 07:29:56.719479 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" path="/var/lib/kubelet/pods/7a2cd411-a78b-4a0e-b667-94994b50d4da/volumes" Nov 22 07:29:57 crc kubenswrapper[4856]: I1122 07:29:57.051882 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ghpb2" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="registry-server" containerID="cri-o://da0690b8c426086ce41ec1c2846699d5fb81ef3913e96ef07d47aed62b6bb06a" gracePeriod=2 Nov 22 07:29:59 crc kubenswrapper[4856]: I1122 07:29:59.754534 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:29:59 crc kubenswrapper[4856]: I1122 07:29:59.755188 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.091664 4856 generic.go:334] "Generic (PLEG): container finished" podID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerID="da0690b8c426086ce41ec1c2846699d5fb81ef3913e96ef07d47aed62b6bb06a" exitCode=0 Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.091718 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghpb2" event={"ID":"e78912cf-da4f-41ec-a4c8-05da36da1594","Type":"ContainerDied","Data":"da0690b8c426086ce41ec1c2846699d5fb81ef3913e96ef07d47aed62b6bb06a"} Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.173356 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk"] Nov 22 07:30:00 crc kubenswrapper[4856]: E1122 07:30:00.174087 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerName="init" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.174171 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerName="init" Nov 22 07:30:00 crc kubenswrapper[4856]: E1122 07:30:00.174277 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerName="dnsmasq-dns" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.174360 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerName="dnsmasq-dns" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.174621 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a2cd411-a78b-4a0e-b667-94994b50d4da" containerName="dnsmasq-dns" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.175335 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.178699 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.178951 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.190068 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk"] Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.284573 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b19df38b-b56d-4de6-9e84-be72dd06e7b3-secret-volume\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.284639 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk22b\" (UniqueName: \"kubernetes.io/projected/b19df38b-b56d-4de6-9e84-be72dd06e7b3-kube-api-access-nk22b\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.284753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b19df38b-b56d-4de6-9e84-be72dd06e7b3-config-volume\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.386411 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b19df38b-b56d-4de6-9e84-be72dd06e7b3-config-volume\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.386578 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b19df38b-b56d-4de6-9e84-be72dd06e7b3-secret-volume\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.386601 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk22b\" (UniqueName: \"kubernetes.io/projected/b19df38b-b56d-4de6-9e84-be72dd06e7b3-kube-api-access-nk22b\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.387586 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b19df38b-b56d-4de6-9e84-be72dd06e7b3-config-volume\") pod 
\"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.393771 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b19df38b-b56d-4de6-9e84-be72dd06e7b3-secret-volume\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.403428 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk22b\" (UniqueName: \"kubernetes.io/projected/b19df38b-b56d-4de6-9e84-be72dd06e7b3-kube-api-access-nk22b\") pod \"collect-profiles-29396610-86kfk\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.499837 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.607746 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:30:00 crc kubenswrapper[4856]: W1122 07:30:00.967294 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb19df38b_b56d_4de6_9e84_be72dd06e7b3.slice/crio-874361821c7342997e5a5b0032cb1c2635938d9e6c94ee8ee12cb1af4bfd331a WatchSource:0}: Error finding container 874361821c7342997e5a5b0032cb1c2635938d9e6c94ee8ee12cb1af4bfd331a: Status 404 returned error can't find the container with id 874361821c7342997e5a5b0032cb1c2635938d9e6c94ee8ee12cb1af4bfd331a Nov 22 07:30:00 crc kubenswrapper[4856]: I1122 07:30:00.970041 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk"] Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.103210 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" event={"ID":"b19df38b-b56d-4de6-9e84-be72dd06e7b3","Type":"ContainerStarted","Data":"874361821c7342997e5a5b0032cb1c2635938d9e6c94ee8ee12cb1af4bfd331a"} Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.751993 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.814807 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvxwx\" (UniqueName: \"kubernetes.io/projected/e78912cf-da4f-41ec-a4c8-05da36da1594-kube-api-access-cvxwx\") pod \"e78912cf-da4f-41ec-a4c8-05da36da1594\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.814990 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-catalog-content\") pod \"e78912cf-da4f-41ec-a4c8-05da36da1594\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.815026 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-utilities\") pod \"e78912cf-da4f-41ec-a4c8-05da36da1594\" (UID: \"e78912cf-da4f-41ec-a4c8-05da36da1594\") " Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.815853 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-utilities" (OuterVolumeSpecName: "utilities") pod "e78912cf-da4f-41ec-a4c8-05da36da1594" (UID: "e78912cf-da4f-41ec-a4c8-05da36da1594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.821727 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78912cf-da4f-41ec-a4c8-05da36da1594-kube-api-access-cvxwx" (OuterVolumeSpecName: "kube-api-access-cvxwx") pod "e78912cf-da4f-41ec-a4c8-05da36da1594" (UID: "e78912cf-da4f-41ec-a4c8-05da36da1594"). InnerVolumeSpecName "kube-api-access-cvxwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.864521 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e78912cf-da4f-41ec-a4c8-05da36da1594" (UID: "e78912cf-da4f-41ec-a4c8-05da36da1594"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.917454 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvxwx\" (UniqueName: \"kubernetes.io/projected/e78912cf-da4f-41ec-a4c8-05da36da1594-kube-api-access-cvxwx\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.917494 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:01 crc kubenswrapper[4856]: I1122 07:30:01.917529 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e78912cf-da4f-41ec-a4c8-05da36da1594-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.111829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" event={"ID":"b19df38b-b56d-4de6-9e84-be72dd06e7b3","Type":"ContainerStarted","Data":"19fdc225f8b1b275b9b6e6920018dfc4146c64843a5ff33faf2ec5fe1a6428e4"} Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.114551 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghpb2" event={"ID":"e78912cf-da4f-41ec-a4c8-05da36da1594","Type":"ContainerDied","Data":"0b9f7e6e83f6711eadb714303a1d07c15f879487ae8d7c1a5233d51b8089ac74"} Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.114586 4856 scope.go:117] "RemoveContainer" containerID="da0690b8c426086ce41ec1c2846699d5fb81ef3913e96ef07d47aed62b6bb06a" Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.114641 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghpb2" Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.135803 4856 scope.go:117] "RemoveContainer" containerID="b5fca97a88728e7da878e34edd7765bc39bcba7d54f6504d93ed85bbf0158222" Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.152998 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghpb2"] Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.161491 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ghpb2"] Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.187408 4856 scope.go:117] "RemoveContainer" containerID="6d693a2b0ad0103313405b56a2ee7d17cf0e41f5bdf0fde811546cf88f6b2133" Nov 22 07:30:02 crc kubenswrapper[4856]: I1122 07:30:02.721346 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" path="/var/lib/kubelet/pods/e78912cf-da4f-41ec-a4c8-05da36da1594/volumes" Nov 22 07:30:03 crc kubenswrapper[4856]: I1122 07:30:03.123746 4856 generic.go:334] "Generic (PLEG): container finished" podID="b19df38b-b56d-4de6-9e84-be72dd06e7b3" containerID="19fdc225f8b1b275b9b6e6920018dfc4146c64843a5ff33faf2ec5fe1a6428e4" exitCode=0 Nov 22 07:30:03 crc kubenswrapper[4856]: I1122 07:30:03.123831 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" event={"ID":"b19df38b-b56d-4de6-9e84-be72dd06e7b3","Type":"ContainerDied","Data":"19fdc225f8b1b275b9b6e6920018dfc4146c64843a5ff33faf2ec5fe1a6428e4"} Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.451704 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.563820 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b19df38b-b56d-4de6-9e84-be72dd06e7b3-secret-volume\") pod \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.564058 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk22b\" (UniqueName: \"kubernetes.io/projected/b19df38b-b56d-4de6-9e84-be72dd06e7b3-kube-api-access-nk22b\") pod \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.564262 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b19df38b-b56d-4de6-9e84-be72dd06e7b3-config-volume\") pod \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\" (UID: \"b19df38b-b56d-4de6-9e84-be72dd06e7b3\") " Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.564999 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b19df38b-b56d-4de6-9e84-be72dd06e7b3-config-volume" (OuterVolumeSpecName: "config-volume") pod "b19df38b-b56d-4de6-9e84-be72dd06e7b3" (UID: "b19df38b-b56d-4de6-9e84-be72dd06e7b3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.572673 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19df38b-b56d-4de6-9e84-be72dd06e7b3-kube-api-access-nk22b" (OuterVolumeSpecName: "kube-api-access-nk22b") pod "b19df38b-b56d-4de6-9e84-be72dd06e7b3" (UID: "b19df38b-b56d-4de6-9e84-be72dd06e7b3"). InnerVolumeSpecName "kube-api-access-nk22b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.576803 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b19df38b-b56d-4de6-9e84-be72dd06e7b3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b19df38b-b56d-4de6-9e84-be72dd06e7b3" (UID: "b19df38b-b56d-4de6-9e84-be72dd06e7b3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.668101 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b19df38b-b56d-4de6-9e84-be72dd06e7b3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.668139 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk22b\" (UniqueName: \"kubernetes.io/projected/b19df38b-b56d-4de6-9e84-be72dd06e7b3-kube-api-access-nk22b\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:04 crc kubenswrapper[4856]: I1122 07:30:04.668150 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b19df38b-b56d-4de6-9e84-be72dd06e7b3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:05 crc kubenswrapper[4856]: I1122 07:30:05.146064 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" event={"ID":"b19df38b-b56d-4de6-9e84-be72dd06e7b3","Type":"ContainerDied","Data":"874361821c7342997e5a5b0032cb1c2635938d9e6c94ee8ee12cb1af4bfd331a"} Nov 22 07:30:05 crc kubenswrapper[4856]: I1122 07:30:05.146395 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="874361821c7342997e5a5b0032cb1c2635938d9e6c94ee8ee12cb1af4bfd331a" Nov 22 07:30:05 crc kubenswrapper[4856]: I1122 07:30:05.146138 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk" Nov 22 07:30:08 crc kubenswrapper[4856]: I1122 07:30:08.108464 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:30:08 crc kubenswrapper[4856]: I1122 07:30:08.109168 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-central-agent" containerID="cri-o://946af67d87f924ac2af7f4cdf82505b05729866f6a051bc37f47293938239f38" gracePeriod=30 Nov 22 07:30:08 crc kubenswrapper[4856]: I1122 07:30:08.109261 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="sg-core" containerID="cri-o://d796e0d46078928f181f510d4c3578e84cb323d3da2b05f3773d246a65f267ae" gracePeriod=30 Nov 22 07:30:08 crc kubenswrapper[4856]: I1122 07:30:08.109266 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-notification-agent" containerID="cri-o://2d30b3db3f28ec8801af5d34284f06f3448849f149a49a685f92fb2fcbe9a623" gracePeriod=30 Nov 22 07:30:08 crc kubenswrapper[4856]: I1122 07:30:08.109239 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="proxy-httpd" containerID="cri-o://95d6c2c47b91df049aeb31824e749494e674c77f87ab15e788b4e9196bec6ae6" gracePeriod=30 Nov 22 07:30:08 crc kubenswrapper[4856]: I1122 07:30:08.180887 4856 generic.go:334] "Generic (PLEG): container finished" podID="a4ab0b87-dec0-42f2-86a2-4e12a02c7573" containerID="d74045845a7dba814efb401d7b033582ccdbf8ee08845c8e8fdf207bd5c6d465" exitCode=0 Nov 22 07:30:08 crc kubenswrapper[4856]: I1122 07:30:08.180938 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" event={"ID":"a4ab0b87-dec0-42f2-86a2-4e12a02c7573","Type":"ContainerDied","Data":"d74045845a7dba814efb401d7b033582ccdbf8ee08845c8e8fdf207bd5c6d465"} Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.191571 4856 generic.go:334] "Generic (PLEG): container finished" podID="49046268-02be-4651-96da-4a3a4c3039f3" containerID="95d6c2c47b91df049aeb31824e749494e674c77f87ab15e788b4e9196bec6ae6" exitCode=0 Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.191878 4856 generic.go:334] "Generic (PLEG): container finished" podID="49046268-02be-4651-96da-4a3a4c3039f3" containerID="d796e0d46078928f181f510d4c3578e84cb323d3da2b05f3773d246a65f267ae" exitCode=2 Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.191892 4856 generic.go:334] "Generic (PLEG): container finished" podID="49046268-02be-4651-96da-4a3a4c3039f3" containerID="946af67d87f924ac2af7f4cdf82505b05729866f6a051bc37f47293938239f38" exitCode=0 Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.191619 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerDied","Data":"95d6c2c47b91df049aeb31824e749494e674c77f87ab15e788b4e9196bec6ae6"} Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.192009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerDied","Data":"d796e0d46078928f181f510d4c3578e84cb323d3da2b05f3773d246a65f267ae"} Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.192040 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerDied","Data":"946af67d87f924ac2af7f4cdf82505b05729866f6a051bc37f47293938239f38"} Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.511247 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.654982 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-config-data\") pod \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.655070 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-scripts\") pod \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.655139 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-combined-ca-bundle\") pod \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.655285 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44zsm\" (UniqueName: \"kubernetes.io/projected/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-kube-api-access-44zsm\") pod \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\" (UID: \"a4ab0b87-dec0-42f2-86a2-4e12a02c7573\") " Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.661670 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-scripts" (OuterVolumeSpecName: "scripts") pod "a4ab0b87-dec0-42f2-86a2-4e12a02c7573" (UID: "a4ab0b87-dec0-42f2-86a2-4e12a02c7573"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.661782 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-kube-api-access-44zsm" (OuterVolumeSpecName: "kube-api-access-44zsm") pod "a4ab0b87-dec0-42f2-86a2-4e12a02c7573" (UID: "a4ab0b87-dec0-42f2-86a2-4e12a02c7573"). InnerVolumeSpecName "kube-api-access-44zsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.683538 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-config-data" (OuterVolumeSpecName: "config-data") pod "a4ab0b87-dec0-42f2-86a2-4e12a02c7573" (UID: "a4ab0b87-dec0-42f2-86a2-4e12a02c7573"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.683602 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4ab0b87-dec0-42f2-86a2-4e12a02c7573" (UID: "a4ab0b87-dec0-42f2-86a2-4e12a02c7573"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.757533 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.757578 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.757589 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:09 crc kubenswrapper[4856]: I1122 07:30:09.757601 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44zsm\" (UniqueName: \"kubernetes.io/projected/a4ab0b87-dec0-42f2-86a2-4e12a02c7573-kube-api-access-44zsm\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.202464 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" event={"ID":"a4ab0b87-dec0-42f2-86a2-4e12a02c7573","Type":"ContainerDied","Data":"2c409d742ee56e58f573ea269a857bbbdc54d54c215a5d51289b0b2419e3ea31"} Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.202505 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c409d742ee56e58f573ea269a857bbbdc54d54c215a5d51289b0b2419e3ea31" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.202572 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-mjp7j" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.323297 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:30:10 crc kubenswrapper[4856]: E1122 07:30:10.323993 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="extract-utilities" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.324097 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="extract-utilities" Nov 22 07:30:10 crc kubenswrapper[4856]: E1122 07:30:10.324194 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="extract-content" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.324258 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="extract-content" Nov 22 07:30:10 crc kubenswrapper[4856]: E1122 07:30:10.324348 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4ab0b87-dec0-42f2-86a2-4e12a02c7573" containerName="nova-cell0-conductor-db-sync" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.324407 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4ab0b87-dec0-42f2-86a2-4e12a02c7573" containerName="nova-cell0-conductor-db-sync" Nov 22 07:30:10 crc kubenswrapper[4856]: E1122 07:30:10.324471 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="registry-server" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.324541 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="registry-server" Nov 22 07:30:10 crc kubenswrapper[4856]: E1122 07:30:10.324608 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19df38b-b56d-4de6-9e84-be72dd06e7b3" containerName="collect-profiles" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.324668 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19df38b-b56d-4de6-9e84-be72dd06e7b3" containerName="collect-profiles" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.324906 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4ab0b87-dec0-42f2-86a2-4e12a02c7573" containerName="nova-cell0-conductor-db-sync" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.325022 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19df38b-b56d-4de6-9e84-be72dd06e7b3" containerName="collect-profiles" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.325101 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78912cf-da4f-41ec-a4c8-05da36da1594" containerName="registry-server" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.326067 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.329932 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9nqcq" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.330133 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.337844 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.471432 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.471730 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.471874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppn7t\" (UniqueName: \"kubernetes.io/projected/18fcab55-6a49-4c21-9314-435129cf376a-kube-api-access-ppn7t\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.573925 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppn7t\" (UniqueName: \"kubernetes.io/projected/18fcab55-6a49-4c21-9314-435129cf376a-kube-api-access-ppn7t\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.574024 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.574091 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.577913 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.578055 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.593698 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppn7t\" (UniqueName: \"kubernetes.io/projected/18fcab55-6a49-4c21-9314-435129cf376a-kube-api-access-ppn7t\") pod \"nova-cell0-conductor-0\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:10 crc kubenswrapper[4856]: I1122 07:30:10.655350 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:11 crc kubenswrapper[4856]: I1122 07:30:11.121399 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:30:11 crc kubenswrapper[4856]: I1122 07:30:11.218008 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"18fcab55-6a49-4c21-9314-435129cf376a","Type":"ContainerStarted","Data":"5f110a7dd89cb87d7a3272051d8d04c315023c6ced6e8dada6479123218a9953"} Nov 22 07:30:12 crc kubenswrapper[4856]: I1122 07:30:12.228914 4856 generic.go:334] "Generic (PLEG): container finished" podID="49046268-02be-4651-96da-4a3a4c3039f3" containerID="2d30b3db3f28ec8801af5d34284f06f3448849f149a49a685f92fb2fcbe9a623" exitCode=0 Nov 22 07:30:12 crc kubenswrapper[4856]: I1122 07:30:12.228993 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerDied","Data":"2d30b3db3f28ec8801af5d34284f06f3448849f149a49a685f92fb2fcbe9a623"} Nov 22 07:30:12 crc kubenswrapper[4856]: I1122 07:30:12.230894 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"18fcab55-6a49-4c21-9314-435129cf376a","Type":"ContainerStarted","Data":"ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411"} Nov 22 07:30:12 crc kubenswrapper[4856]: I1122 07:30:12.231725 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:12 crc kubenswrapper[4856]: I1122 07:30:12.606131 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:30:12 crc kubenswrapper[4856]: I1122 07:30:12.627461 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.627438242 podStartE2EDuration="2.627438242s" podCreationTimestamp="2025-11-22 07:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:12.249576851 +0000 UTC m=+1654.662970109" watchObservedRunningTime="2025-11-22 07:30:12.627438242 +0000 UTC m=+1655.040831500" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.350662 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.446729 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-sg-core-conf-yaml\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.446791 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-combined-ca-bundle\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.446823 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-config-data\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.446962 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-scripts\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.447028 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpm2v\" (UniqueName: \"kubernetes.io/projected/49046268-02be-4651-96da-4a3a4c3039f3-kube-api-access-fpm2v\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.447049 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-ceilometer-tls-certs\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.447100 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-run-httpd\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.447425 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-log-httpd\") pod \"49046268-02be-4651-96da-4a3a4c3039f3\" (UID: \"49046268-02be-4651-96da-4a3a4c3039f3\") " Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.447529 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.447796 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.448109 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.452404 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49046268-02be-4651-96da-4a3a4c3039f3-kube-api-access-fpm2v" (OuterVolumeSpecName: "kube-api-access-fpm2v") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "kube-api-access-fpm2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.453758 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-scripts" (OuterVolumeSpecName: "scripts") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.475568 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.500582 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.538010 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.550457 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49046268-02be-4651-96da-4a3a4c3039f3-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.550532 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.550547 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.550559 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.550571 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpm2v\" (UniqueName: \"kubernetes.io/projected/49046268-02be-4651-96da-4a3a4c3039f3-kube-api-access-fpm2v\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.550582 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.553432 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-config-data" (OuterVolumeSpecName: "config-data") pod "49046268-02be-4651-96da-4a3a4c3039f3" (UID: "49046268-02be-4651-96da-4a3a4c3039f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:13 crc kubenswrapper[4856]: I1122 07:30:13.652782 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49046268-02be-4651-96da-4a3a4c3039f3-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.263761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49046268-02be-4651-96da-4a3a4c3039f3","Type":"ContainerDied","Data":"1106241933ddc2caac6ea21e2b27781d347aebbf7d4eb43d6c57024a34d50707"} Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.263823 4856 scope.go:117] "RemoveContainer" containerID="95d6c2c47b91df049aeb31824e749494e674c77f87ab15e788b4e9196bec6ae6" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.263966 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.285718 4856 scope.go:117] "RemoveContainer" containerID="d796e0d46078928f181f510d4c3578e84cb323d3da2b05f3773d246a65f267ae" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.304855 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.308699 4856 scope.go:117] "RemoveContainer" containerID="2d30b3db3f28ec8801af5d34284f06f3448849f149a49a685f92fb2fcbe9a623" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.317575 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.329827 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:30:14 crc kubenswrapper[4856]: E1122 07:30:14.330196 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-central-agent" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330216 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-central-agent" Nov 22 07:30:14 crc kubenswrapper[4856]: E1122 07:30:14.330232 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="proxy-httpd" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330240 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="proxy-httpd" Nov 22 07:30:14 crc kubenswrapper[4856]: E1122 07:30:14.330254 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="sg-core" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330261 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="sg-core" Nov 22 07:30:14 crc kubenswrapper[4856]: E1122 07:30:14.330273 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-notification-agent" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330279 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-notification-agent" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330434 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="sg-core" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330444 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="proxy-httpd" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330464 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-notification-agent" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330477 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="49046268-02be-4651-96da-4a3a4c3039f3" containerName="ceilometer-central-agent" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.330521 4856 scope.go:117] "RemoveContainer" containerID="946af67d87f924ac2af7f4cdf82505b05729866f6a051bc37f47293938239f38" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.332085 4856 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.334779 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.338945 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.339927 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.352005 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.469827 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-config-data\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.469922 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.469991 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.470152 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.470329 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-run-httpd\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.470431 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-scripts\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.470895 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2l8s\" (UniqueName: \"kubernetes.io/projected/8d53cbd2-2659-4dac-a5ea-2d6285d32896-kube-api-access-s2l8s\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.470947 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-log-httpd\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573032 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-config-data\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573356 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573406 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573454 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-run-httpd\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573486 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-scripts\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573589 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2l8s\" (UniqueName: \"kubernetes.io/projected/8d53cbd2-2659-4dac-a5ea-2d6285d32896-kube-api-access-s2l8s\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.573613 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-log-httpd\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.574070 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-run-httpd\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.574388 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-log-httpd\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.577752 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.578114 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-scripts\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.579228 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.580970 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.581781 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-config-data\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.589394 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2l8s\" (UniqueName: \"kubernetes.io/projected/8d53cbd2-2659-4dac-a5ea-2d6285d32896-kube-api-access-s2l8s\") pod \"ceilometer-0\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.655475 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.721889 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49046268-02be-4651-96da-4a3a4c3039f3" path="/var/lib/kubelet/pods/49046268-02be-4651-96da-4a3a4c3039f3/volumes" Nov 22 07:30:14 crc kubenswrapper[4856]: I1122 07:30:14.991809 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.058590 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d7cc75d86-4k58n"] Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.059951 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d7cc75d86-4k58n" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-api" containerID="cri-o://cfaff3d6018e2606aa3f9d7f28a762f77dbb0bcefdc7c1d329e0fa7a832d13c0" gracePeriod=30 Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.060063 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d7cc75d86-4k58n" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-httpd" containerID="cri-o://bb8e00173ef09996a83288402af80da1fd0227b773da9ed0230b02d55113055a" gracePeriod=30 Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.120206 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.286205 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerStarted","Data":"f36a5dd7fcaa357acd6768ce7caa458ceb8eb496f6edf8a9d8123b9d5bf80fcf"} Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.682554 4856 scope.go:117] "RemoveContainer" containerID="509825c540533ac55bf923080287a8b9f0531f9bae14c1af67afe971497af123" Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.735698 4856 scope.go:117] "RemoveContainer" containerID="d241e30d1a3694faed58909e112649b7c21d6bca93c8390391a6511428f3737b" Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.761868 4856 scope.go:117] "RemoveContainer" containerID="37e2066a7b84c2ef9c3d9d79dc7de5c5f24f7dfb09f03d6781d7007196e58e36" Nov 22 07:30:15 crc kubenswrapper[4856]: I1122 07:30:15.842177 4856 scope.go:117] "RemoveContainer" containerID="befa04b54831b9c35c148c54b8e621913675dd76361199832ffd651b0bcee91a" Nov 22 07:30:16 crc kubenswrapper[4856]: I1122 07:30:16.297405 4856 generic.go:334] "Generic (PLEG): container finished" podID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerID="bb8e00173ef09996a83288402af80da1fd0227b773da9ed0230b02d55113055a" exitCode=0 Nov 22 07:30:16 crc kubenswrapper[4856]: I1122 07:30:16.297462 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7cc75d86-4k58n" event={"ID":"975789c2-1cb7-43db-a687-9a6cbd45eaa0","Type":"ContainerDied","Data":"bb8e00173ef09996a83288402af80da1fd0227b773da9ed0230b02d55113055a"} Nov 22 07:30:17 crc kubenswrapper[4856]: I1122 07:30:17.310468 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerStarted","Data":"02504c98907f21ce7764d4fd28d98f7f096076bf7cb9878d6a5f71bfa8fcfe37"} Nov 22 07:30:18 crc kubenswrapper[4856]: I1122 07:30:18.319594 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerStarted","Data":"d476936173fa8b9339e378c66ed81ca4fdf164e17aac3a0cee640e38a116c3dc"} Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.342967 4856 generic.go:334] "Generic (PLEG): container finished" podID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerID="cfaff3d6018e2606aa3f9d7f28a762f77dbb0bcefdc7c1d329e0fa7a832d13c0" exitCode=0 Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.343060 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7cc75d86-4k58n" event={"ID":"975789c2-1cb7-43db-a687-9a6cbd45eaa0","Type":"ContainerDied","Data":"cfaff3d6018e2606aa3f9d7f28a762f77dbb0bcefdc7c1d329e0fa7a832d13c0"} Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.350575 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerStarted","Data":"08393bede254000c3ca4c821f71ad0d95b0c2099bf833752bffeaf293edf8a8a"} Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.579887 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.661457 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-config\") pod \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.661581 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-ovndb-tls-certs\") pod \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.661700 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-combined-ca-bundle\") pod \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.661823 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-httpd-config\") pod \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.661844 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqn46\" (UniqueName: \"kubernetes.io/projected/975789c2-1cb7-43db-a687-9a6cbd45eaa0-kube-api-access-hqn46\") pod \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\" (UID: \"975789c2-1cb7-43db-a687-9a6cbd45eaa0\") " Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.666848 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "975789c2-1cb7-43db-a687-9a6cbd45eaa0" (UID: "975789c2-1cb7-43db-a687-9a6cbd45eaa0"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.667901 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975789c2-1cb7-43db-a687-9a6cbd45eaa0-kube-api-access-hqn46" (OuterVolumeSpecName: "kube-api-access-hqn46") pod "975789c2-1cb7-43db-a687-9a6cbd45eaa0" (UID: "975789c2-1cb7-43db-a687-9a6cbd45eaa0"). InnerVolumeSpecName "kube-api-access-hqn46". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.711988 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "975789c2-1cb7-43db-a687-9a6cbd45eaa0" (UID: "975789c2-1cb7-43db-a687-9a6cbd45eaa0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.712959 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-config" (OuterVolumeSpecName: "config") pod "975789c2-1cb7-43db-a687-9a6cbd45eaa0" (UID: "975789c2-1cb7-43db-a687-9a6cbd45eaa0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.742075 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "975789c2-1cb7-43db-a687-9a6cbd45eaa0" (UID: "975789c2-1cb7-43db-a687-9a6cbd45eaa0"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.764813 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.764850 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.764870 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqn46\" (UniqueName: \"kubernetes.io/projected/975789c2-1cb7-43db-a687-9a6cbd45eaa0-kube-api-access-hqn46\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.764885 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:19 crc kubenswrapper[4856]: I1122 07:30:19.764895 4856 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/975789c2-1cb7-43db-a687-9a6cbd45eaa0-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.365410 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d7cc75d86-4k58n" event={"ID":"975789c2-1cb7-43db-a687-9a6cbd45eaa0","Type":"ContainerDied","Data":"f1ddf02d6c2fdfda9a3ae07ce2cac8f4f73cf4f3209eafd1cfa3035e0ef355c3"} Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.365462 4856 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-7d7cc75d86-4k58n" Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.365479 4856 scope.go:117] "RemoveContainer" containerID="bb8e00173ef09996a83288402af80da1fd0227b773da9ed0230b02d55113055a" Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.386960 4856 scope.go:117] "RemoveContainer" containerID="cfaff3d6018e2606aa3f9d7f28a762f77dbb0bcefdc7c1d329e0fa7a832d13c0" Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.405199 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d7cc75d86-4k58n"] Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.412438 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7d7cc75d86-4k58n"] Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.684081 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 07:30:20 crc kubenswrapper[4856]: I1122 07:30:20.722492 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" path="/var/lib/kubelet/pods/975789c2-1cb7-43db-a687-9a6cbd45eaa0/volumes" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.209076 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-nr4d2"] Nov 22 07:30:21 crc kubenswrapper[4856]: E1122 07:30:21.209706 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-api" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.209722 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-api" Nov 22 07:30:21 crc kubenswrapper[4856]: E1122 07:30:21.209765 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-httpd" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.209772 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-httpd" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.209926 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-api" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.209955 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="975789c2-1cb7-43db-a687-9a6cbd45eaa0" containerName="neutron-httpd" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.210480 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.227453 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.227765 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.271629 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr4d2"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.295547 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-scripts\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.295604 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.295659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-config-data\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.295710 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m558q\" (UniqueName: \"kubernetes.io/projected/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-kube-api-access-m558q\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.397603 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m558q\" (UniqueName: \"kubernetes.io/projected/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-kube-api-access-m558q\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.397790 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-scripts\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.397820 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.397885 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-config-data\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.411907 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.413474 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.419264 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.423040 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.423878 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-scripts\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.463985 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-config-data\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.464152 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m558q\" (UniqueName: \"kubernetes.io/projected/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-kube-api-access-m558q\") pod \"nova-cell0-cell-mapping-nr4d2\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.493861 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.499692 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.499819 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6d02fe-0574-4567-b934-7245e9788210-logs\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.499858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctjn\" (UniqueName: \"kubernetes.io/projected/3a6d02fe-0574-4567-b934-7245e9788210-kube-api-access-kctjn\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.499915 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-config-data\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.537602 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.540555 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.547243 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.605537 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.645241 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-config-data\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.645482 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.656258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6d02fe-0574-4567-b934-7245e9788210-logs\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.694175 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.694420 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6d02fe-0574-4567-b934-7245e9788210-logs\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.707991 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-config-data\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.710689 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kctjn\" (UniqueName: \"kubernetes.io/projected/3a6d02fe-0574-4567-b934-7245e9788210-kube-api-access-kctjn\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.734393 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kctjn\" 
(UniqueName: \"kubernetes.io/projected/3a6d02fe-0574-4567-b934-7245e9788210-kube-api-access-kctjn\") pod \"nova-api-0\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.734464 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.765396 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.767296 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.770276 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.788147 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-588dc4df7-wm5rv"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.790244 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.817341 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.819753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbgr\" (UniqueName: \"kubernetes.io/projected/369dc315-311b-4701-b4ff-4c0925c06d03-kube-api-access-vxbgr\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.819826 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-config-data\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.819876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-logs\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.819903 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-sb\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.819946 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.819971 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q4vg\" (UniqueName: 
\"kubernetes.io/projected/c174c09c-9aab-48b7-9c81-33fe98b2d401-kube-api-access-8q4vg\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.820018 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8jp8\" (UniqueName: \"kubernetes.io/projected/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-kube-api-access-m8jp8\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.820044 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-config-data\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.820073 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-svc\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.821820 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-nb\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.821984 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-config\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.822031 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-swift-storage-0\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.822073 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.842568 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-588dc4df7-wm5rv"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.851649 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.853157 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.861585 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.873852 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.923743 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-config\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.923789 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-swift-storage-0\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.923817 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.924723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.924773 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxbgr\" (UniqueName: \"kubernetes.io/projected/369dc315-311b-4701-b4ff-4c0925c06d03-kube-api-access-vxbgr\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.924605 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-config\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.924798 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-config-data\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.924821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66gc\" (UniqueName: \"kubernetes.io/projected/123058e1-a3df-48c7-af5e-5edcf61b4d44-kube-api-access-x66gc\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.924840 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-swift-storage-0\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.924852 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-logs\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925340 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-logs\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925420 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-sb\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925524 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925606 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q4vg\" (UniqueName: \"kubernetes.io/projected/c174c09c-9aab-48b7-9c81-33fe98b2d401-kube-api-access-8q4vg\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925665 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8jp8\" (UniqueName: \"kubernetes.io/projected/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-kube-api-access-m8jp8\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925695 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-config-data\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925761 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-svc\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.925952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-nb\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " 
pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.926068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.926795 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-sb\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.927870 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-svc\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.928301 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-nb\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.935311 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-config-data\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.936569 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.942531 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-config-data\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.949982 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.950033 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8jp8\" (UniqueName: \"kubernetes.io/projected/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-kube-api-access-m8jp8\") pod \"nova-metadata-0\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.952864 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.961230 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q4vg\" (UniqueName: \"kubernetes.io/projected/c174c09c-9aab-48b7-9c81-33fe98b2d401-kube-api-access-8q4vg\") pod \"nova-scheduler-0\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:21 crc kubenswrapper[4856]: I1122 07:30:21.965225 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxbgr\" (UniqueName: \"kubernetes.io/projected/369dc315-311b-4701-b4ff-4c0925c06d03-kube-api-access-vxbgr\") pod \"dnsmasq-dns-588dc4df7-wm5rv\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.017667 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.028021 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x66gc\" (UniqueName: \"kubernetes.io/projected/123058e1-a3df-48c7-af5e-5edcf61b4d44-kube-api-access-x66gc\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.028178 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.028234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.036115 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.042300 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.076366 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x66gc\" (UniqueName: \"kubernetes.io/projected/123058e1-a3df-48c7-af5e-5edcf61b4d44-kube-api-access-x66gc\") pod \"nova-cell1-novncproxy-0\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.107261 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.131269 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.204619 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.303906 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr4d2"] Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.437903 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerStarted","Data":"888656c2851622693d777a2fa9c526789697aa0aeda8c832c7f7253e04075778"} Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.438446 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.445053 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr4d2" event={"ID":"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d","Type":"ContainerStarted","Data":"9287ad81bd77bd0ed8584b3da6427d71e8a8890ac16a3f20be0221f3b031ee9a"} Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.486166 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.568789313 podStartE2EDuration="8.486118226s" podCreationTimestamp="2025-11-22 07:30:14 +0000 UTC" firstStartedPulling="2025-11-22 07:30:15.133683632 +0000 UTC m=+1657.547076900" lastFinishedPulling="2025-11-22 07:30:21.051012555 +0000 UTC m=+1663.464405813" observedRunningTime="2025-11-22 07:30:22.475863217 +0000 UTC m=+1664.889256475" watchObservedRunningTime="2025-11-22 07:30:22.486118226 +0000 UTC m=+1664.899511484" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.639573 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gd2pc"] Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.640828 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.642856 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.648352 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.653368 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gd2pc"] Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.687299 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.745775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-config-data\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.745887 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66b6x\" (UniqueName: \"kubernetes.io/projected/e782df2b-d7a8-4319-aead-d5165a61314a-kube-api-access-66b6x\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.745918 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.745996 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-scripts\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: W1122 07:30:22.846286 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9cf8d61_cd00_4fca_9bcf_6ad3b96e35f6.slice/crio-72d3c29661b08833d465309f649f6b2797674b679bebdcc3b118478a5d4473a1 WatchSource:0}: Error finding container 72d3c29661b08833d465309f649f6b2797674b679bebdcc3b118478a5d4473a1: Status 404 returned error can't find the container with id 72d3c29661b08833d465309f649f6b2797674b679bebdcc3b118478a5d4473a1 Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.847639 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66b6x\" (UniqueName: \"kubernetes.io/projected/e782df2b-d7a8-4319-aead-d5165a61314a-kube-api-access-66b6x\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.847684 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.847778 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-scripts\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.847862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-config-data\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.850713 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.854129 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-config-data\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.854554 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.871964 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-588dc4df7-wm5rv"] Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.874126 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66b6x\" (UniqueName: \"kubernetes.io/projected/e782df2b-d7a8-4319-aead-d5165a61314a-kube-api-access-66b6x\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.874613 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-scripts\") pod \"nova-cell1-conductor-db-sync-gd2pc\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.962034 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.981598 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:30:22 crc kubenswrapper[4856]: I1122 07:30:22.992523 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:23 crc kubenswrapper[4856]: W1122 07:30:23.003215 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod123058e1_a3df_48c7_af5e_5edcf61b4d44.slice/crio-dd34ded9a4e122adca3f853416005836e161720b939650b1a57544b32e61b798 WatchSource:0}: Error finding container dd34ded9a4e122adca3f853416005836e161720b939650b1a57544b32e61b798: Status 404 returned error can't find the container with id dd34ded9a4e122adca3f853416005836e161720b939650b1a57544b32e61b798 Nov 22 07:30:23 crc kubenswrapper[4856]: W1122 07:30:23.018904 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc174c09c_9aab_48b7_9c81_33fe98b2d401.slice/crio-aaec3a70560457f2e1ed33d890dcba5bbc8e03847401e5f5768aabc9b36fc6ee WatchSource:0}: Error finding container aaec3a70560457f2e1ed33d890dcba5bbc8e03847401e5f5768aabc9b36fc6ee: Status 404 returned error can't find the container with id aaec3a70560457f2e1ed33d890dcba5bbc8e03847401e5f5768aabc9b36fc6ee Nov 22 07:30:23 crc kubenswrapper[4856]: I1122 07:30:23.457971 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6","Type":"ContainerStarted","Data":"72d3c29661b08833d465309f649f6b2797674b679bebdcc3b118478a5d4473a1"} Nov 22 07:30:23 crc kubenswrapper[4856]: I1122 07:30:23.460657 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6d02fe-0574-4567-b934-7245e9788210","Type":"ContainerStarted","Data":"c24a251cd709566439d3028c7fd22f738590b70f7e73e5b18cb68d6a0c036d0d"} Nov 22 07:30:23 crc kubenswrapper[4856]: I1122 07:30:23.462503 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" event={"ID":"369dc315-311b-4701-b4ff-4c0925c06d03","Type":"ContainerStarted","Data":"a7c3453807bdc06296e7874e557924c79679641c804605c7f318382a7c9c9d5e"} Nov 22 07:30:23 crc kubenswrapper[4856]: I1122 07:30:23.464141 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c174c09c-9aab-48b7-9c81-33fe98b2d401","Type":"ContainerStarted","Data":"aaec3a70560457f2e1ed33d890dcba5bbc8e03847401e5f5768aabc9b36fc6ee"} Nov 22 07:30:23 crc kubenswrapper[4856]: I1122 07:30:23.468182 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"123058e1-a3df-48c7-af5e-5edcf61b4d44","Type":"ContainerStarted","Data":"dd34ded9a4e122adca3f853416005836e161720b939650b1a57544b32e61b798"} Nov 22 07:30:23 crc kubenswrapper[4856]: I1122 07:30:23.532533 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gd2pc"] Nov 22 07:30:24 crc kubenswrapper[4856]: I1122 07:30:24.481081 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr4d2" event={"ID":"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d","Type":"ContainerStarted","Data":"e3477f3418da229a71284b8471efbfa54d35d6398ff6c275fa37d3833c1d430c"} Nov 22 07:30:24 crc kubenswrapper[4856]: I1122 07:30:24.491078 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" event={"ID":"e782df2b-d7a8-4319-aead-d5165a61314a","Type":"ContainerStarted","Data":"dfb96d957f6cb86c56972d43dc87c8482e105284bd355469527ebc982327a614"} Nov 22 07:30:24 crc kubenswrapper[4856]: I1122 07:30:24.491130 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" event={"ID":"e782df2b-d7a8-4319-aead-d5165a61314a","Type":"ContainerStarted","Data":"fbb778ebcd5865153b3e48b7da54e6fd79bfa2c9cd0a6b42ceb92f4420aa35f1"} Nov 22 07:30:24 crc kubenswrapper[4856]: I1122 07:30:24.500157 4856 generic.go:334] "Generic (PLEG): container finished" podID="369dc315-311b-4701-b4ff-4c0925c06d03" containerID="655621f6b54f6f59a9017161f1f54b0acfe90ff8102781853dc683c11d05f10e" exitCode=0 Nov 22 07:30:24 crc kubenswrapper[4856]: I1122 07:30:24.500206 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" event={"ID":"369dc315-311b-4701-b4ff-4c0925c06d03","Type":"ContainerDied","Data":"655621f6b54f6f59a9017161f1f54b0acfe90ff8102781853dc683c11d05f10e"} Nov 22 07:30:24 crc kubenswrapper[4856]: I1122 07:30:24.500639 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-nr4d2" podStartSLOduration=3.5006215149999997 podStartE2EDuration="3.500621515s" podCreationTimestamp="2025-11-22 07:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:24.497991564 +0000 UTC m=+1666.911384832" watchObservedRunningTime="2025-11-22 07:30:24.500621515 +0000 UTC m=+1666.914014773" Nov 22 07:30:24 crc kubenswrapper[4856]: I1122 07:30:24.520932 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" podStartSLOduration=2.520909637 podStartE2EDuration="2.520909637s" podCreationTimestamp="2025-11-22 07:30:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:24.514664358 +0000 UTC m=+1666.928057646" watchObservedRunningTime="2025-11-22 07:30:24.520909637 +0000 UTC m=+1666.934302895" Nov 22 07:30:25 crc kubenswrapper[4856]: I1122 07:30:25.029831 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:30:25 crc kubenswrapper[4856]: I1122 07:30:25.047323 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:25 crc kubenswrapper[4856]: I1122 07:30:25.517672 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" event={"ID":"369dc315-311b-4701-b4ff-4c0925c06d03","Type":"ContainerStarted","Data":"cd506a9cce5f375fe71e07b7e5f119bb0d475f8558d939f650f3dc3b13206492"} Nov 22 07:30:25 crc kubenswrapper[4856]: I1122 07:30:25.517859 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:25 crc kubenswrapper[4856]: I1122 07:30:25.549880 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" podStartSLOduration=4.549858798 podStartE2EDuration="4.549858798s" podCreationTimestamp="2025-11-22 07:30:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:25.537379518 
+0000 UTC m=+1667.950772776" watchObservedRunningTime="2025-11-22 07:30:25.549858798 +0000 UTC m=+1667.963252056" Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.562378 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c174c09c-9aab-48b7-9c81-33fe98b2d401","Type":"ContainerStarted","Data":"5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8"} Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.565030 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"123058e1-a3df-48c7-af5e-5edcf61b4d44","Type":"ContainerStarted","Data":"8a4091a03777983698af4404e0183df884d433a4302a486d6176c40e2f6c5256"} Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.565198 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="123058e1-a3df-48c7-af5e-5edcf61b4d44" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8a4091a03777983698af4404e0183df884d433a4302a486d6176c40e2f6c5256" gracePeriod=30 Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.569687 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6","Type":"ContainerStarted","Data":"8e1e6b46ea67cf436cc69bd3b850ec860bb6090b6ef87700ad51fe90fe151365"} Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.569744 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6","Type":"ContainerStarted","Data":"ba1c2425b486cc7241dbdecb80e054f1a9d3eea46d6670305d77f294037866fd"} Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.569913 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-log" containerID="cri-o://ba1c2425b486cc7241dbdecb80e054f1a9d3eea46d6670305d77f294037866fd" gracePeriod=30 Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.570020 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-metadata" containerID="cri-o://8e1e6b46ea67cf436cc69bd3b850ec860bb6090b6ef87700ad51fe90fe151365" gracePeriod=30 Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.573925 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6d02fe-0574-4567-b934-7245e9788210","Type":"ContainerStarted","Data":"89cdd001df7801b445de99ac1cd0d1ad9f94f9868fc26d5e211218c30596f805"} Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.573973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6d02fe-0574-4567-b934-7245e9788210","Type":"ContainerStarted","Data":"ffdd3ca38408b987d9d0ea61512a955ab061b2e9e99a4ac866fe731cbc23b7ff"} Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.598047 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.128129481 podStartE2EDuration="7.598025346s" podCreationTimestamp="2025-11-22 07:30:21 +0000 UTC" firstStartedPulling="2025-11-22 07:30:23.024643291 +0000 UTC m=+1665.438036549" lastFinishedPulling="2025-11-22 07:30:27.494539156 +0000 UTC m=+1669.907932414" observedRunningTime="2025-11-22 07:30:28.579597864 +0000 UTC m=+1670.992991122" 
watchObservedRunningTime="2025-11-22 07:30:28.598025346 +0000 UTC m=+1671.011418604" Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.605237 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.022993068 podStartE2EDuration="7.605219362s" podCreationTimestamp="2025-11-22 07:30:21 +0000 UTC" firstStartedPulling="2025-11-22 07:30:22.853747407 +0000 UTC m=+1665.267140665" lastFinishedPulling="2025-11-22 07:30:27.435973701 +0000 UTC m=+1669.849366959" observedRunningTime="2025-11-22 07:30:28.600871144 +0000 UTC m=+1671.014264392" watchObservedRunningTime="2025-11-22 07:30:28.605219362 +0000 UTC m=+1671.018612610" Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.636655 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.16516685 podStartE2EDuration="7.636636208s" podCreationTimestamp="2025-11-22 07:30:21 +0000 UTC" firstStartedPulling="2025-11-22 07:30:23.01947423 +0000 UTC m=+1665.432867498" lastFinishedPulling="2025-11-22 07:30:27.490943608 +0000 UTC m=+1669.904336856" observedRunningTime="2025-11-22 07:30:28.632412493 +0000 UTC m=+1671.045805761" watchObservedRunningTime="2025-11-22 07:30:28.636636208 +0000 UTC m=+1671.050029466" Nov 22 07:30:28 crc kubenswrapper[4856]: I1122 07:30:28.710732 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.916683483 podStartE2EDuration="7.710709475s" podCreationTimestamp="2025-11-22 07:30:21 +0000 UTC" firstStartedPulling="2025-11-22 07:30:22.695879468 +0000 UTC m=+1665.109272726" lastFinishedPulling="2025-11-22 07:30:27.48990547 +0000 UTC m=+1669.903298718" observedRunningTime="2025-11-22 07:30:28.703750566 +0000 UTC m=+1671.117143834" watchObservedRunningTime="2025-11-22 07:30:28.710709475 +0000 UTC m=+1671.124102733" Nov 22 07:30:29 crc kubenswrapper[4856]: I1122 07:30:29.584330 4856 generic.go:334] "Generic (PLEG): container finished" podID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerID="ba1c2425b486cc7241dbdecb80e054f1a9d3eea46d6670305d77f294037866fd" exitCode=143 Nov 22 07:30:29 crc kubenswrapper[4856]: I1122 07:30:29.584465 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6","Type":"ContainerDied","Data":"ba1c2425b486cc7241dbdecb80e054f1a9d3eea46d6670305d77f294037866fd"} Nov 22 07:30:29 crc kubenswrapper[4856]: I1122 07:30:29.755035 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:30:29 crc kubenswrapper[4856]: I1122 07:30:29.755099 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:30:29 crc kubenswrapper[4856]: I1122 07:30:29.755142 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:30:29 crc kubenswrapper[4856]: I1122 07:30:29.755909 4856 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:30:29 crc kubenswrapper[4856]: I1122 07:30:29.755972 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" gracePeriod=600 Nov 22 07:30:30 crc kubenswrapper[4856]: I1122 07:30:30.596181 4856 generic.go:334] "Generic (PLEG): container finished" podID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerID="8e1e6b46ea67cf436cc69bd3b850ec860bb6090b6ef87700ad51fe90fe151365" exitCode=0 Nov 22 07:30:30 crc kubenswrapper[4856]: I1122 07:30:30.596220 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6","Type":"ContainerDied","Data":"8e1e6b46ea67cf436cc69bd3b850ec860bb6090b6ef87700ad51fe90fe151365"} Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.585692 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.623424 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" exitCode=0 Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.623560 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167"} Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.623599 4856 scope.go:117] "RemoveContainer" containerID="b2d6ca7441dd492e3a581af2bfbc9e9d1023d20289aecd1a0ad5d8af62f035ce" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.633842 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6","Type":"ContainerDied","Data":"72d3c29661b08833d465309f649f6b2797674b679bebdcc3b118478a5d4473a1"} Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.633957 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.686165 4856 scope.go:117] "RemoveContainer" containerID="8e1e6b46ea67cf436cc69bd3b850ec860bb6090b6ef87700ad51fe90fe151365" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.708412 4856 scope.go:117] "RemoveContainer" containerID="ba1c2425b486cc7241dbdecb80e054f1a9d3eea46d6670305d77f294037866fd" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.733726 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-config-data\") pod \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.733805 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8jp8\" (UniqueName: \"kubernetes.io/projected/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-kube-api-access-m8jp8\") pod \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.733860 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-combined-ca-bundle\") pod \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.733974 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-logs\") pod \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\" (UID: \"d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6\") " Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.735320 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-logs" (OuterVolumeSpecName: "logs") pod "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" (UID: "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.744133 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-kube-api-access-m8jp8" (OuterVolumeSpecName: "kube-api-access-m8jp8") pod "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" (UID: "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6"). InnerVolumeSpecName "kube-api-access-m8jp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.764690 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" (UID: "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.768558 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-config-data" (OuterVolumeSpecName: "config-data") pod "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" (UID: "d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:31 crc kubenswrapper[4856]: E1122 07:30:31.779162 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.837050 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.837084 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.837097 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8jp8\" (UniqueName: \"kubernetes.io/projected/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-kube-api-access-m8jp8\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.837109 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.955700 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.955740 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:30:31 crc kubenswrapper[4856]: I1122 07:30:31.994213 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.004210 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.016089 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:32 crc kubenswrapper[4856]: E1122 07:30:32.016827 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-metadata" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.016856 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-metadata" Nov 22 07:30:32 crc kubenswrapper[4856]: E1122 07:30:32.016915 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-log" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.016928 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-log" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.017198 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-log" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.017221 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" containerName="nova-metadata-metadata" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.019109 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.025112 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.025399 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.054196 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.108154 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.108202 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.133763 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.137204 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.143957 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfkjc\" (UniqueName: \"kubernetes.io/projected/9710c392-7608-4cc9-8201-7d39af56e340-kube-api-access-cfkjc\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.144041 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-config-data\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.144126 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.144635 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.145070 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9710c392-7608-4cc9-8201-7d39af56e340-logs\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.205089 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:32 crc 
kubenswrapper[4856]: I1122 07:30:32.245245 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cfcbb9df-w8w5j"] Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.245602 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerName="dnsmasq-dns" containerID="cri-o://7d6c5796d486492405a3211036cbc8546024e4773abcfd3441e9ecfdc599fc62" gracePeriod=10 Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.247260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfkjc\" (UniqueName: \"kubernetes.io/projected/9710c392-7608-4cc9-8201-7d39af56e340-kube-api-access-cfkjc\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.247318 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-config-data\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.247342 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.247463 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.247595 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9710c392-7608-4cc9-8201-7d39af56e340-logs\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.250769 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9710c392-7608-4cc9-8201-7d39af56e340-logs\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.261220 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-config-data\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.272010 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfkjc\" (UniqueName: \"kubernetes.io/projected/9710c392-7608-4cc9-8201-7d39af56e340-kube-api-access-cfkjc\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.272701 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.275409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.352568 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:32 crc kubenswrapper[4856]: I1122 07:30:32.399082 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.184:5353: connect: connection refused" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.677627 4856 generic.go:334] "Generic (PLEG): container finished" podID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerID="7d6c5796d486492405a3211036cbc8546024e4773abcfd3441e9ecfdc599fc62" exitCode=0 Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.677698 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" event={"ID":"3ec90c1b-704a-41bd-869c-62041bfe19ea","Type":"ContainerDied","Data":"7d6c5796d486492405a3211036cbc8546024e4773abcfd3441e9ecfdc599fc62"} Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.688831 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:30:36 crc kubenswrapper[4856]: E1122 07:30:32.691621 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.695773 4856 generic.go:334] "Generic (PLEG): container finished" podID="db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" containerID="e3477f3418da229a71284b8471efbfa54d35d6398ff6c275fa37d3833c1d430c" exitCode=0 Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.696492 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr4d2" event={"ID":"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d","Type":"ContainerDied","Data":"e3477f3418da229a71284b8471efbfa54d35d6398ff6c275fa37d3833c1d430c"} Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.742982 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6" path="/var/lib/kubelet/pods/d9cf8d61-cd00-4fca-9bcf-6ad3b96e35f6/volumes" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.770898 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.827998 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.974496 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-config\") pod \"3ec90c1b-704a-41bd-869c-62041bfe19ea\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.974718 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-nb\") pod \"3ec90c1b-704a-41bd-869c-62041bfe19ea\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.974762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-svc\") pod \"3ec90c1b-704a-41bd-869c-62041bfe19ea\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.974819 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv45w\" (UniqueName: \"kubernetes.io/projected/3ec90c1b-704a-41bd-869c-62041bfe19ea-kube-api-access-jv45w\") pod \"3ec90c1b-704a-41bd-869c-62041bfe19ea\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.974855 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-swift-storage-0\") pod \"3ec90c1b-704a-41bd-869c-62041bfe19ea\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.974933 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-sb\") pod \"3ec90c1b-704a-41bd-869c-62041bfe19ea\" (UID: \"3ec90c1b-704a-41bd-869c-62041bfe19ea\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:32.983780 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec90c1b-704a-41bd-869c-62041bfe19ea-kube-api-access-jv45w" (OuterVolumeSpecName: "kube-api-access-jv45w") pod "3ec90c1b-704a-41bd-869c-62041bfe19ea" (UID: "3ec90c1b-704a-41bd-869c-62041bfe19ea"). InnerVolumeSpecName "kube-api-access-jv45w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.038737 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.038968 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.050053 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ec90c1b-704a-41bd-869c-62041bfe19ea" (UID: "3ec90c1b-704a-41bd-869c-62041bfe19ea"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.067158 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3ec90c1b-704a-41bd-869c-62041bfe19ea" (UID: "3ec90c1b-704a-41bd-869c-62041bfe19ea"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.068269 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ec90c1b-704a-41bd-869c-62041bfe19ea" (UID: "3ec90c1b-704a-41bd-869c-62041bfe19ea"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.077401 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.077434 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv45w\" (UniqueName: \"kubernetes.io/projected/3ec90c1b-704a-41bd-869c-62041bfe19ea-kube-api-access-jv45w\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.077469 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.077482 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.086567 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-config" (OuterVolumeSpecName: "config") pod "3ec90c1b-704a-41bd-869c-62041bfe19ea" (UID: "3ec90c1b-704a-41bd-869c-62041bfe19ea"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.087802 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ec90c1b-704a-41bd-869c-62041bfe19ea" (UID: "3ec90c1b-704a-41bd-869c-62041bfe19ea"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.180403 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.180433 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ec90c1b-704a-41bd-869c-62041bfe19ea-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.709093 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" event={"ID":"3ec90c1b-704a-41bd-869c-62041bfe19ea","Type":"ContainerDied","Data":"35d8c8fe40907481a4b0d9d6d9f06f9786540856041bf089d77d751edef4cf9f"} Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.709142 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77cfcbb9df-w8w5j" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.709202 4856 scope.go:117] "RemoveContainer" containerID="7d6c5796d486492405a3211036cbc8546024e4773abcfd3441e9ecfdc599fc62" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.750670 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77cfcbb9df-w8w5j"] Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.759403 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77cfcbb9df-w8w5j"] Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:33.766015 4856 scope.go:117] "RemoveContainer" containerID="b03526c854b5155fe69d9afd92691715de6d2fe2ea93c5e1f77c78b7b4be7ccd" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:34.727036 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" path="/var/lib/kubelet/pods/3ec90c1b-704a-41bd-869c-62041bfe19ea/volumes" Nov 22 07:30:36 crc kubenswrapper[4856]: W1122 07:30:36.513298 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9710c392_7608_4cc9_8201_7d39af56e340.slice/crio-4a5b6dc901447cbae9c87887e68a8d1601783e60cc014079f31c05eeea9367b1 WatchSource:0}: Error finding container 4a5b6dc901447cbae9c87887e68a8d1601783e60cc014079f31c05eeea9367b1: Status 404 returned error can't find the container with id 4a5b6dc901447cbae9c87887e68a8d1601783e60cc014079f31c05eeea9367b1 Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.518336 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.578003 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.736618 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-nr4d2" event={"ID":"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d","Type":"ContainerDied","Data":"9287ad81bd77bd0ed8584b3da6427d71e8a8890ac16a3f20be0221f3b031ee9a"} Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.736655 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9287ad81bd77bd0ed8584b3da6427d71e8a8890ac16a3f20be0221f3b031ee9a" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.736624 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-nr4d2" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.737548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9710c392-7608-4cc9-8201-7d39af56e340","Type":"ContainerStarted","Data":"4a5b6dc901447cbae9c87887e68a8d1601783e60cc014079f31c05eeea9367b1"} Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.748400 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-config-data\") pod \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.748594 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-scripts\") pod \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.748662 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m558q\" (UniqueName: \"kubernetes.io/projected/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-kube-api-access-m558q\") pod \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.748730 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-combined-ca-bundle\") pod \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\" (UID: \"db34970f-8e46-4f6f-9c3c-437b1a6d7a2d\") " Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.753793 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-kube-api-access-m558q" (OuterVolumeSpecName: "kube-api-access-m558q") pod "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" (UID: "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d"). InnerVolumeSpecName "kube-api-access-m558q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.754031 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-scripts" (OuterVolumeSpecName: "scripts") pod "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" (UID: "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.777127 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" (UID: "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.800390 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-config-data" (OuterVolumeSpecName: "config-data") pod "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" (UID: "db34970f-8e46-4f6f-9c3c-437b1a6d7a2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.850924 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.851213 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.851223 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m558q\" (UniqueName: \"kubernetes.io/projected/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-kube-api-access-m558q\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:36 crc kubenswrapper[4856]: I1122 07:30:36.851231 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.747930 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9710c392-7608-4cc9-8201-7d39af56e340","Type":"ContainerStarted","Data":"e09adc60dc2696460cadead69fc33186d049bb6d556d196dd0d802a26bf9781c"} Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.747981 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9710c392-7608-4cc9-8201-7d39af56e340","Type":"ContainerStarted","Data":"e63c976c1752f36ee37d4a458357148bbde09c5527090d703bc6406a8b26d523"} Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.757923 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.758197 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-log" containerID="cri-o://ffdd3ca38408b987d9d0ea61512a955ab061b2e9e99a4ac866fe731cbc23b7ff" gracePeriod=30 Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.758229 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-api" containerID="cri-o://89cdd001df7801b445de99ac1cd0d1ad9f94f9868fc26d5e211218c30596f805" gracePeriod=30 Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.769316 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-scheduler-0"] Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.769849 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c174c09c-9aab-48b7-9c81-33fe98b2d401" containerName="nova-scheduler-scheduler" containerID="cri-o://5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8" gracePeriod=30 Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.773788 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=6.7737712420000005 podStartE2EDuration="6.773771242s" podCreationTimestamp="2025-11-22 07:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:37.76930493 +0000 UTC m=+1680.182698188" watchObservedRunningTime="2025-11-22 07:30:37.773771242 +0000 UTC m=+1680.187164500" Nov 22 07:30:37 crc kubenswrapper[4856]: I1122 07:30:37.819234 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:38 crc kubenswrapper[4856]: I1122 07:30:38.757272 4856 generic.go:334] "Generic (PLEG): container finished" podID="3a6d02fe-0574-4567-b934-7245e9788210" containerID="ffdd3ca38408b987d9d0ea61512a955ab061b2e9e99a4ac866fe731cbc23b7ff" exitCode=143 Nov 22 07:30:38 crc kubenswrapper[4856]: I1122 07:30:38.757481 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6d02fe-0574-4567-b934-7245e9788210","Type":"ContainerDied","Data":"ffdd3ca38408b987d9d0ea61512a955ab061b2e9e99a4ac866fe731cbc23b7ff"} Nov 22 07:30:39 crc kubenswrapper[4856]: I1122 07:30:39.764775 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9710c392-7608-4cc9-8201-7d39af56e340" containerName="nova-metadata-metadata" containerID="cri-o://e09adc60dc2696460cadead69fc33186d049bb6d556d196dd0d802a26bf9781c" gracePeriod=30 Nov 22 07:30:39 crc kubenswrapper[4856]: I1122 07:30:39.764843 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9710c392-7608-4cc9-8201-7d39af56e340" containerName="nova-metadata-log" containerID="cri-o://e63c976c1752f36ee37d4a458357148bbde09c5527090d703bc6406a8b26d523" gracePeriod=30 Nov 22 07:30:40 crc kubenswrapper[4856]: I1122 07:30:40.777227 4856 generic.go:334] "Generic (PLEG): container finished" podID="9710c392-7608-4cc9-8201-7d39af56e340" containerID="e09adc60dc2696460cadead69fc33186d049bb6d556d196dd0d802a26bf9781c" exitCode=0 Nov 22 07:30:40 crc kubenswrapper[4856]: I1122 07:30:40.777261 4856 generic.go:334] "Generic (PLEG): container finished" podID="9710c392-7608-4cc9-8201-7d39af56e340" containerID="e63c976c1752f36ee37d4a458357148bbde09c5527090d703bc6406a8b26d523" exitCode=143 Nov 22 07:30:40 crc kubenswrapper[4856]: I1122 07:30:40.777287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9710c392-7608-4cc9-8201-7d39af56e340","Type":"ContainerDied","Data":"e09adc60dc2696460cadead69fc33186d049bb6d556d196dd0d802a26bf9781c"} Nov 22 07:30:40 crc kubenswrapper[4856]: I1122 07:30:40.777320 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9710c392-7608-4cc9-8201-7d39af56e340","Type":"ContainerDied","Data":"e63c976c1752f36ee37d4a458357148bbde09c5527090d703bc6406a8b26d523"} Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.066680 4856 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.232906 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-config-data\") pod \"9710c392-7608-4cc9-8201-7d39af56e340\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.232994 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-combined-ca-bundle\") pod \"9710c392-7608-4cc9-8201-7d39af56e340\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.233028 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9710c392-7608-4cc9-8201-7d39af56e340-logs\") pod \"9710c392-7608-4cc9-8201-7d39af56e340\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.233105 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfkjc\" (UniqueName: \"kubernetes.io/projected/9710c392-7608-4cc9-8201-7d39af56e340-kube-api-access-cfkjc\") pod \"9710c392-7608-4cc9-8201-7d39af56e340\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.233292 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-nova-metadata-tls-certs\") pod \"9710c392-7608-4cc9-8201-7d39af56e340\" (UID: \"9710c392-7608-4cc9-8201-7d39af56e340\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.234656 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9710c392-7608-4cc9-8201-7d39af56e340-logs" (OuterVolumeSpecName: "logs") pod "9710c392-7608-4cc9-8201-7d39af56e340" (UID: "9710c392-7608-4cc9-8201-7d39af56e340"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.240448 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9710c392-7608-4cc9-8201-7d39af56e340-kube-api-access-cfkjc" (OuterVolumeSpecName: "kube-api-access-cfkjc") pod "9710c392-7608-4cc9-8201-7d39af56e340" (UID: "9710c392-7608-4cc9-8201-7d39af56e340"). InnerVolumeSpecName "kube-api-access-cfkjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.266308 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9710c392-7608-4cc9-8201-7d39af56e340" (UID: "9710c392-7608-4cc9-8201-7d39af56e340"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.274491 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-config-data" (OuterVolumeSpecName: "config-data") pod "9710c392-7608-4cc9-8201-7d39af56e340" (UID: "9710c392-7608-4cc9-8201-7d39af56e340"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.294858 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9710c392-7608-4cc9-8201-7d39af56e340" (UID: "9710c392-7608-4cc9-8201-7d39af56e340"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.335219 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.335264 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.335283 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9710c392-7608-4cc9-8201-7d39af56e340-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.335294 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9710c392-7608-4cc9-8201-7d39af56e340-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.335306 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfkjc\" (UniqueName: \"kubernetes.io/projected/9710c392-7608-4cc9-8201-7d39af56e340-kube-api-access-cfkjc\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.786792 4856 generic.go:334] "Generic (PLEG): container finished" podID="c174c09c-9aab-48b7-9c81-33fe98b2d401" containerID="5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8" exitCode=0 Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.786988 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c174c09c-9aab-48b7-9c81-33fe98b2d401","Type":"ContainerDied","Data":"5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8"} Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.795412 4856 generic.go:334] "Generic (PLEG): container finished" podID="3a6d02fe-0574-4567-b934-7245e9788210" containerID="89cdd001df7801b445de99ac1cd0d1ad9f94f9868fc26d5e211218c30596f805" exitCode=0 Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.795544 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6d02fe-0574-4567-b934-7245e9788210","Type":"ContainerDied","Data":"89cdd001df7801b445de99ac1cd0d1ad9f94f9868fc26d5e211218c30596f805"} Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.795609 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a6d02fe-0574-4567-b934-7245e9788210","Type":"ContainerDied","Data":"c24a251cd709566439d3028c7fd22f738590b70f7e73e5b18cb68d6a0c036d0d"} Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.795632 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c24a251cd709566439d3028c7fd22f738590b70f7e73e5b18cb68d6a0c036d0d" Nov 22 07:30:41 crc kubenswrapper[4856]: 
I1122 07:30:41.798916 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9710c392-7608-4cc9-8201-7d39af56e340","Type":"ContainerDied","Data":"4a5b6dc901447cbae9c87887e68a8d1601783e60cc014079f31c05eeea9367b1"} Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.798981 4856 scope.go:117] "RemoveContainer" containerID="e09adc60dc2696460cadead69fc33186d049bb6d556d196dd0d802a26bf9781c" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.799529 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.836090 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.849623 4856 scope.go:117] "RemoveContainer" containerID="e63c976c1752f36ee37d4a458357148bbde09c5527090d703bc6406a8b26d523" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.850598 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.858422 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.879427 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:41 crc kubenswrapper[4856]: E1122 07:30:41.879926 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9710c392-7608-4cc9-8201-7d39af56e340" containerName="nova-metadata-log" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.879945 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9710c392-7608-4cc9-8201-7d39af56e340" containerName="nova-metadata-log" Nov 22 07:30:41 crc kubenswrapper[4856]: E1122 07:30:41.879969 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerName="init" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.879987 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerName="init" Nov 22 07:30:41 crc kubenswrapper[4856]: E1122 07:30:41.880003 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" containerName="nova-manage" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880012 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" containerName="nova-manage" Nov 22 07:30:41 crc kubenswrapper[4856]: E1122 07:30:41.880040 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-api" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880047 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-api" Nov 22 07:30:41 crc kubenswrapper[4856]: E1122 07:30:41.880064 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-log" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880072 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-log" Nov 22 07:30:41 crc kubenswrapper[4856]: E1122 07:30:41.880088 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9710c392-7608-4cc9-8201-7d39af56e340" 
containerName="nova-metadata-metadata" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880096 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9710c392-7608-4cc9-8201-7d39af56e340" containerName="nova-metadata-metadata" Nov 22 07:30:41 crc kubenswrapper[4856]: E1122 07:30:41.880120 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerName="dnsmasq-dns" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880130 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerName="dnsmasq-dns" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880328 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9710c392-7608-4cc9-8201-7d39af56e340" containerName="nova-metadata-metadata" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880342 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec90c1b-704a-41bd-869c-62041bfe19ea" containerName="dnsmasq-dns" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880364 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" containerName="nova-manage" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880378 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-log" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880389 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6d02fe-0574-4567-b934-7245e9788210" containerName="nova-api-api" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.880396 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9710c392-7608-4cc9-8201-7d39af56e340" containerName="nova-metadata-log" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.882844 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.884761 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.885056 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.922791 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.953563 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-combined-ca-bundle\") pod \"3a6d02fe-0574-4567-b934-7245e9788210\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.953611 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kctjn\" (UniqueName: \"kubernetes.io/projected/3a6d02fe-0574-4567-b934-7245e9788210-kube-api-access-kctjn\") pod \"3a6d02fe-0574-4567-b934-7245e9788210\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.953698 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-config-data\") pod \"3a6d02fe-0574-4567-b934-7245e9788210\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.953742 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6d02fe-0574-4567-b934-7245e9788210-logs\") pod \"3a6d02fe-0574-4567-b934-7245e9788210\" (UID: \"3a6d02fe-0574-4567-b934-7245e9788210\") " Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.954952 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a6d02fe-0574-4567-b934-7245e9788210-logs" (OuterVolumeSpecName: "logs") pod "3a6d02fe-0574-4567-b934-7245e9788210" (UID: "3a6d02fe-0574-4567-b934-7245e9788210"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:30:41 crc kubenswrapper[4856]: I1122 07:30:41.962067 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6d02fe-0574-4567-b934-7245e9788210-kube-api-access-kctjn" (OuterVolumeSpecName: "kube-api-access-kctjn") pod "3a6d02fe-0574-4567-b934-7245e9788210" (UID: "3a6d02fe-0574-4567-b934-7245e9788210"). InnerVolumeSpecName "kube-api-access-kctjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.008239 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a6d02fe-0574-4567-b934-7245e9788210" (UID: "3a6d02fe-0574-4567-b934-7245e9788210"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.011809 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-config-data" (OuterVolumeSpecName: "config-data") pod "3a6d02fe-0574-4567-b934-7245e9788210" (UID: "3a6d02fe-0574-4567-b934-7245e9788210"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.055472 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.055763 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-config-data\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.055803 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dac877-8061-402a-b0bc-30f86a9305d6-logs\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.055997 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.056042 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6xbq\" (UniqueName: \"kubernetes.io/projected/02dac877-8061-402a-b0bc-30f86a9305d6-kube-api-access-r6xbq\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.056101 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.056116 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kctjn\" (UniqueName: \"kubernetes.io/projected/3a6d02fe-0574-4567-b934-7245e9788210-kube-api-access-kctjn\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.056127 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a6d02fe-0574-4567-b934-7245e9788210-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.056137 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a6d02fe-0574-4567-b934-7245e9788210-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:42 crc kubenswrapper[4856]: E1122 07:30:42.110347 4856 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8 is running failed: container process not found" containerID="5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:30:42 crc kubenswrapper[4856]: E1122 07:30:42.110995 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8 is running failed: container process not found" containerID="5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:30:42 crc kubenswrapper[4856]: E1122 07:30:42.112583 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8 is running failed: container process not found" containerID="5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:30:42 crc kubenswrapper[4856]: E1122 07:30:42.112653 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c174c09c-9aab-48b7-9c81-33fe98b2d401" containerName="nova-scheduler-scheduler" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.158042 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6xbq\" (UniqueName: \"kubernetes.io/projected/02dac877-8061-402a-b0bc-30f86a9305d6-kube-api-access-r6xbq\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.158204 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.158251 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-config-data\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.158343 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dac877-8061-402a-b0bc-30f86a9305d6-logs\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.158465 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" 
Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.160085 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dac877-8061-402a-b0bc-30f86a9305d6-logs\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.163455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.168349 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.170898 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-config-data\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.176170 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6xbq\" (UniqueName: \"kubernetes.io/projected/02dac877-8061-402a-b0bc-30f86a9305d6-kube-api-access-r6xbq\") pod \"nova-metadata-0\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.198535 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.219312 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.362128 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-combined-ca-bundle\") pod \"c174c09c-9aab-48b7-9c81-33fe98b2d401\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.362200 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-config-data\") pod \"c174c09c-9aab-48b7-9c81-33fe98b2d401\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.362550 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q4vg\" (UniqueName: \"kubernetes.io/projected/c174c09c-9aab-48b7-9c81-33fe98b2d401-kube-api-access-8q4vg\") pod \"c174c09c-9aab-48b7-9c81-33fe98b2d401\" (UID: \"c174c09c-9aab-48b7-9c81-33fe98b2d401\") " Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.366776 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c174c09c-9aab-48b7-9c81-33fe98b2d401-kube-api-access-8q4vg" (OuterVolumeSpecName: "kube-api-access-8q4vg") pod "c174c09c-9aab-48b7-9c81-33fe98b2d401" (UID: "c174c09c-9aab-48b7-9c81-33fe98b2d401"). InnerVolumeSpecName "kube-api-access-8q4vg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.391428 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-config-data" (OuterVolumeSpecName: "config-data") pod "c174c09c-9aab-48b7-9c81-33fe98b2d401" (UID: "c174c09c-9aab-48b7-9c81-33fe98b2d401"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.391887 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c174c09c-9aab-48b7-9c81-33fe98b2d401" (UID: "c174c09c-9aab-48b7-9c81-33fe98b2d401"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.464886 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q4vg\" (UniqueName: \"kubernetes.io/projected/c174c09c-9aab-48b7-9c81-33fe98b2d401-kube-api-access-8q4vg\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.464924 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.464934 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c174c09c-9aab-48b7-9c81-33fe98b2d401-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.657497 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.725123 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9710c392-7608-4cc9-8201-7d39af56e340" path="/var/lib/kubelet/pods/9710c392-7608-4cc9-8201-7d39af56e340/volumes" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.812477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c174c09c-9aab-48b7-9c81-33fe98b2d401","Type":"ContainerDied","Data":"aaec3a70560457f2e1ed33d890dcba5bbc8e03847401e5f5768aabc9b36fc6ee"} Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.812546 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.812583 4856 scope.go:117] "RemoveContainer" containerID="5958269df9150681dd441552a75538702068f0f32c90021941e316abd296fac8" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.818902 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.819881 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02dac877-8061-402a-b0bc-30f86a9305d6","Type":"ContainerStarted","Data":"15b931d66e0d84d5f7a5bf87595d5333a49653cc8371a274c7aaae206ba0031c"} Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.847155 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.869548 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.901886 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.916654 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.926973 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: E1122 07:30:42.927455 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c174c09c-9aab-48b7-9c81-33fe98b2d401" containerName="nova-scheduler-scheduler" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.927482 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c174c09c-9aab-48b7-9c81-33fe98b2d401" containerName="nova-scheduler-scheduler" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.927769 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c174c09c-9aab-48b7-9c81-33fe98b2d401" containerName="nova-scheduler-scheduler" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.928399 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.933058 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.937786 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.954172 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.955704 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.957553 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:42 crc kubenswrapper[4856]: I1122 07:30:42.958677 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.076797 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-config-data\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.077223 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.077268 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.077340 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-logs\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.077372 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-config-data\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.077452 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mbp\" (UniqueName: \"kubernetes.io/projected/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-kube-api-access-w8mbp\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.077483 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pf88\" (UniqueName: \"kubernetes.io/projected/fa204458-2052-48a7-9b65-10fdfb68e792-kube-api-access-7pf88\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.179259 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-logs\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.179333 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-config-data\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.179486 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mbp\" (UniqueName: \"kubernetes.io/projected/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-kube-api-access-w8mbp\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.179545 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pf88\" (UniqueName: \"kubernetes.io/projected/fa204458-2052-48a7-9b65-10fdfb68e792-kube-api-access-7pf88\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.179613 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-config-data\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.179643 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.179694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.180526 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-logs\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.187085 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.189256 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.190322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-config-data\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.192348 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-config-data\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.199152 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mbp\" (UniqueName: \"kubernetes.io/projected/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-kube-api-access-w8mbp\") pod \"nova-api-0\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.202336 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pf88\" (UniqueName: \"kubernetes.io/projected/fa204458-2052-48a7-9b65-10fdfb68e792-kube-api-access-7pf88\") pod \"nova-scheduler-0\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.324343 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.336408 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.834969 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02dac877-8061-402a-b0bc-30f86a9305d6","Type":"ContainerStarted","Data":"df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd"} Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.835287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02dac877-8061-402a-b0bc-30f86a9305d6","Type":"ContainerStarted","Data":"bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba"} Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.856462 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.856440806 podStartE2EDuration="2.856440806s" podCreationTimestamp="2025-11-22 07:30:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:43.856081066 +0000 UTC m=+1686.269474324" watchObservedRunningTime="2025-11-22 07:30:43.856440806 +0000 UTC m=+1686.269834064" Nov 22 07:30:43 crc kubenswrapper[4856]: I1122 07:30:43.911817 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.055115 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.675177 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.720363 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6d02fe-0574-4567-b934-7245e9788210" path="/var/lib/kubelet/pods/3a6d02fe-0574-4567-b934-7245e9788210/volumes" Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.721276 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c174c09c-9aab-48b7-9c81-33fe98b2d401" path="/var/lib/kubelet/pods/c174c09c-9aab-48b7-9c81-33fe98b2d401/volumes" Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.847035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8","Type":"ContainerStarted","Data":"6e3dd8b924b897acb9b184243b472d1bfded14d6b7301c4d1e6a22fc4e1cb1fa"} Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.847078 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8","Type":"ContainerStarted","Data":"1305ade93c45b7db5cb683dda815f78f4b84965b753342ff5aa6010603f877fd"} Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.847088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8","Type":"ContainerStarted","Data":"3ef5d523c58cc9076466196d896dcf4e71a0a76b7d6bf314b25457022c127f48"} Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.851668 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa204458-2052-48a7-9b65-10fdfb68e792","Type":"ContainerStarted","Data":"fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d"} Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.851727 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa204458-2052-48a7-9b65-10fdfb68e792","Type":"ContainerStarted","Data":"b8bfae512ccf1b72164a2de13546ddab483dfb0ffd27e7de56bc460c1790e524"} Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.870483 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.8704679300000002 podStartE2EDuration="2.87046793s" podCreationTimestamp="2025-11-22 07:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:44.865801753 +0000 UTC m=+1687.279195011" watchObservedRunningTime="2025-11-22 07:30:44.87046793 +0000 UTC m=+1687.283861188" Nov 22 07:30:44 crc kubenswrapper[4856]: I1122 07:30:44.883953 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.883932257 podStartE2EDuration="2.883932257s" podCreationTimestamp="2025-11-22 07:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:30:44.879204637 +0000 UTC m=+1687.292597895" watchObservedRunningTime="2025-11-22 07:30:44.883932257 +0000 UTC m=+1687.297325515" Nov 22 07:30:47 crc kubenswrapper[4856]: I1122 07:30:47.219862 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:30:47 crc kubenswrapper[4856]: I1122 07:30:47.220178 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:30:47 crc kubenswrapper[4856]: I1122 07:30:47.709629 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:30:47 crc kubenswrapper[4856]: E1122 07:30:47.709960 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:30:48 crc kubenswrapper[4856]: I1122 07:30:48.325310 4856 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:30:52 crc kubenswrapper[4856]: I1122 07:30:52.220385 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:30:52 crc kubenswrapper[4856]: I1122 07:30:52.220697 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:30:53 crc kubenswrapper[4856]: I1122 07:30:53.236947 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:30:53 crc kubenswrapper[4856]: I1122 07:30:53.236980 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:30:53 crc kubenswrapper[4856]: I1122 07:30:53.325143 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 07:30:53 crc kubenswrapper[4856]: I1122 07:30:53.337354 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:30:53 crc kubenswrapper[4856]: I1122 07:30:53.337403 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:30:53 crc kubenswrapper[4856]: I1122 07:30:53.356147 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 07:30:53 crc kubenswrapper[4856]: I1122 07:30:53.976857 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 07:30:54 crc kubenswrapper[4856]: I1122 07:30:54.419813 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:30:54 crc kubenswrapper[4856]: I1122 07:30:54.419929 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.200:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 07:30:58 crc kubenswrapper[4856]: E1122 07:30:58.853343 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod123058e1_a3df_48c7_af5e_5edcf61b4d44.slice/crio-conmon-8a4091a03777983698af4404e0183df884d433a4302a486d6176c40e2f6c5256.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod123058e1_a3df_48c7_af5e_5edcf61b4d44.slice/crio-8a4091a03777983698af4404e0183df884d433a4302a486d6176c40e2f6c5256.scope\": RecentStats: unable to find data in memory cache]" Nov 22 07:30:58 crc kubenswrapper[4856]: I1122 
07:30:58.986230 4856 generic.go:334] "Generic (PLEG): container finished" podID="123058e1-a3df-48c7-af5e-5edcf61b4d44" containerID="8a4091a03777983698af4404e0183df884d433a4302a486d6176c40e2f6c5256" exitCode=137 Nov 22 07:30:58 crc kubenswrapper[4856]: I1122 07:30:58.986281 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"123058e1-a3df-48c7-af5e-5edcf61b4d44","Type":"ContainerDied","Data":"8a4091a03777983698af4404e0183df884d433a4302a486d6176c40e2f6c5256"} Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.498472 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.593650 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x66gc\" (UniqueName: \"kubernetes.io/projected/123058e1-a3df-48c7-af5e-5edcf61b4d44-kube-api-access-x66gc\") pod \"123058e1-a3df-48c7-af5e-5edcf61b4d44\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.593899 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-combined-ca-bundle\") pod \"123058e1-a3df-48c7-af5e-5edcf61b4d44\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.593930 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-config-data\") pod \"123058e1-a3df-48c7-af5e-5edcf61b4d44\" (UID: \"123058e1-a3df-48c7-af5e-5edcf61b4d44\") " Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.600674 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/123058e1-a3df-48c7-af5e-5edcf61b4d44-kube-api-access-x66gc" (OuterVolumeSpecName: "kube-api-access-x66gc") pod "123058e1-a3df-48c7-af5e-5edcf61b4d44" (UID: "123058e1-a3df-48c7-af5e-5edcf61b4d44"). InnerVolumeSpecName "kube-api-access-x66gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.625691 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-config-data" (OuterVolumeSpecName: "config-data") pod "123058e1-a3df-48c7-af5e-5edcf61b4d44" (UID: "123058e1-a3df-48c7-af5e-5edcf61b4d44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.626194 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "123058e1-a3df-48c7-af5e-5edcf61b4d44" (UID: "123058e1-a3df-48c7-af5e-5edcf61b4d44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.696125 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.696180 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/123058e1-a3df-48c7-af5e-5edcf61b4d44-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.696192 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x66gc\" (UniqueName: \"kubernetes.io/projected/123058e1-a3df-48c7-af5e-5edcf61b4d44-kube-api-access-x66gc\") on node \"crc\" DevicePath \"\"" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.997395 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"123058e1-a3df-48c7-af5e-5edcf61b4d44","Type":"ContainerDied","Data":"dd34ded9a4e122adca3f853416005836e161720b939650b1a57544b32e61b798"} Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.998587 4856 scope.go:117] "RemoveContainer" containerID="8a4091a03777983698af4404e0183df884d433a4302a486d6176c40e2f6c5256" Nov 22 07:30:59 crc kubenswrapper[4856]: I1122 07:30:59.997488 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.031360 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.039722 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.061559 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:31:00 crc kubenswrapper[4856]: E1122 07:31:00.061991 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="123058e1-a3df-48c7-af5e-5edcf61b4d44" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.062019 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="123058e1-a3df-48c7-af5e-5edcf61b4d44" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.062252 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="123058e1-a3df-48c7-af5e-5edcf61b4d44" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.063018 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.065458 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.065636 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.074157 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.080054 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.208316 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.208398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vrn9\" (UniqueName: \"kubernetes.io/projected/8f9815d1-2297-4a66-9793-ba485053ca2a-kube-api-access-6vrn9\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.208440 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.208465 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.208796 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.310977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.311113 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 
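The VerifyControllerAttachedVolume/MountVolume entries above enumerate the replacement nova-cell1-novncproxy-0 pod's volumes: several Secret-backed volumes (config-data, combined-ca-bundle, vencrypt-tls-certs, nova-novncproxy-tls-certs) plus a projected kube-api-access token volume. A hedged sketch, using the k8s.io/api types, of how Secret-backed volumes of that shape are declared in a pod spec; the first secret name comes from the log, while the second secret name and all mount paths are assumptions, not read from the actual manifest:

```go
// volumes_sketch.go - illustrative only: how Secret-backed volumes like the
// ones the kubelet mounts above are declared in a pod spec. The secret name
// "nova-cell1-novncproxy-config-data" appears in the log; the CA bundle
// secret name and the mount paths are assumptions for this sketch.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "config-data",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "nova-cell1-novncproxy-config-data"},
			},
		},
		{
			Name: "combined-ca-bundle",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}, // assumed name
			},
		},
	}
	mounts := []corev1.VolumeMount{
		{Name: "config-data", MountPath: "/var/lib/config-data", ReadOnly: true},              // assumed path
		{Name: "combined-ca-bundle", MountPath: "/etc/pki/ca-trust/extracted", ReadOnly: true}, // assumed path
	}
	fmt.Printf("%d volumes, %d mounts\n", len(volumes), len(mounts))
}
```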
07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.311145 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vrn9\" (UniqueName: \"kubernetes.io/projected/8f9815d1-2297-4a66-9793-ba485053ca2a-kube-api-access-6vrn9\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.311167 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.311186 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.316002 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.316296 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.317330 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.317694 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.332477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vrn9\" (UniqueName: \"kubernetes.io/projected/8f9815d1-2297-4a66-9793-ba485053ca2a-kube-api-access-6vrn9\") pod \"nova-cell1-novncproxy-0\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.382654 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.735653 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="123058e1-a3df-48c7-af5e-5edcf61b4d44" path="/var/lib/kubelet/pods/123058e1-a3df-48c7-af5e-5edcf61b4d44/volumes" Nov 22 07:31:00 crc kubenswrapper[4856]: I1122 07:31:00.838508 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:31:00 crc kubenswrapper[4856]: W1122 07:31:00.844044 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f9815d1_2297_4a66_9793_ba485053ca2a.slice/crio-1d4ac275fe718fb06abe26d3a57ceedc8ad5abd3b62553ba6c0ce64fc14b2756 WatchSource:0}: Error finding container 1d4ac275fe718fb06abe26d3a57ceedc8ad5abd3b62553ba6c0ce64fc14b2756: Status 404 returned error can't find the container with id 1d4ac275fe718fb06abe26d3a57ceedc8ad5abd3b62553ba6c0ce64fc14b2756 Nov 22 07:31:01 crc kubenswrapper[4856]: I1122 07:31:01.011224 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8f9815d1-2297-4a66-9793-ba485053ca2a","Type":"ContainerStarted","Data":"1d4ac275fe718fb06abe26d3a57ceedc8ad5abd3b62553ba6c0ce64fc14b2756"} Nov 22 07:31:02 crc kubenswrapper[4856]: I1122 07:31:02.025326 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8f9815d1-2297-4a66-9793-ba485053ca2a","Type":"ContainerStarted","Data":"75e814a4cfa4f97ecc9bfab324de4d5b2b33d836ae12cc47b87c6782b91c5dae"} Nov 22 07:31:02 crc kubenswrapper[4856]: I1122 07:31:02.047010 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.046991295 podStartE2EDuration="2.046991295s" podCreationTimestamp="2025-11-22 07:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:31:02.046185222 +0000 UTC m=+1704.459578490" watchObservedRunningTime="2025-11-22 07:31:02.046991295 +0000 UTC m=+1704.460384553" Nov 22 07:31:02 crc kubenswrapper[4856]: I1122 07:31:02.225043 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:31:02 crc kubenswrapper[4856]: I1122 07:31:02.227708 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:31:02 crc kubenswrapper[4856]: I1122 07:31:02.230233 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:31:02 crc kubenswrapper[4856]: I1122 07:31:02.710605 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:31:02 crc kubenswrapper[4856]: E1122 07:31:02.711174 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:31:03 crc kubenswrapper[4856]: I1122 07:31:03.039543 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 
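The machine-config-daemon entry just above ("back-off 5m0s restarting failed container ... CrashLoopBackOff") shows a container that has reached the ceiling of the kubelet's restart back-off. By default the kubelet roughly doubles the delay between restarts of a crashing container, starting near 10 seconds and capping at 5 minutes; the constants in this sketch are those commonly documented defaults, not values read from this node's configuration:

```go
// backoff_sketch.go - rough illustration of the restart back-off progression
// implied by the "back-off 5m0s restarting failed container" message above.
// 10s initial delay, doubling, and a 5m cap are commonly documented kubelet
// defaults; treat them as assumptions, not settings read from this node.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // once capped, the kubelet reports "back-off 5m0s ..."
		}
	}
}
```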
07:31:03 crc kubenswrapper[4856]: I1122 07:31:03.342659 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:31:03 crc kubenswrapper[4856]: I1122 07:31:03.343173 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:31:03 crc kubenswrapper[4856]: I1122 07:31:03.343296 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:31:03 crc kubenswrapper[4856]: I1122 07:31:03.348803 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.041613 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.049743 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.207116 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b57bd9f89-z95qh"] Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.209402 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.233553 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b57bd9f89-z95qh"] Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.297884 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-config\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.298022 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.298093 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.298123 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.298170 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcpmw\" (UniqueName: \"kubernetes.io/projected/8c76350d-ce88-42c5-8f7c-68c084a511e2-kube-api-access-zcpmw\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc 
kubenswrapper[4856]: I1122 07:31:04.298196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-svc\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.399499 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.399597 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.399620 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.399656 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcpmw\" (UniqueName: \"kubernetes.io/projected/8c76350d-ce88-42c5-8f7c-68c084a511e2-kube-api-access-zcpmw\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.399673 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-svc\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.399738 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-config\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.400896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-config\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.400918 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-nb\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.401967 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-sb\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.402543 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-swift-storage-0\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.402624 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-svc\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.423398 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcpmw\" (UniqueName: \"kubernetes.io/projected/8c76350d-ce88-42c5-8f7c-68c084a511e2-kube-api-access-zcpmw\") pod \"dnsmasq-dns-6b57bd9f89-z95qh\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:04 crc kubenswrapper[4856]: I1122 07:31:04.548786 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:05 crc kubenswrapper[4856]: I1122 07:31:05.131114 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b57bd9f89-z95qh"] Nov 22 07:31:05 crc kubenswrapper[4856]: I1122 07:31:05.383827 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.060966 4856 generic.go:334] "Generic (PLEG): container finished" podID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerID="da67b84177209e9078903a9d7ca7f3ae2a9d1b2f39212601d306011553e1be52" exitCode=0 Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.061027 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" event={"ID":"8c76350d-ce88-42c5-8f7c-68c084a511e2","Type":"ContainerDied","Data":"da67b84177209e9078903a9d7ca7f3ae2a9d1b2f39212601d306011553e1be52"} Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.061494 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" event={"ID":"8c76350d-ce88-42c5-8f7c-68c084a511e2","Type":"ContainerStarted","Data":"5322cf2859055f4460688dafde41c71b7d57a773b39f772049d7578374f41363"} Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.309798 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.310144 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-central-agent" containerID="cri-o://02504c98907f21ce7764d4fd28d98f7f096076bf7cb9878d6a5f71bfa8fcfe37" gracePeriod=30 Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.310248 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="proxy-httpd" containerID="cri-o://888656c2851622693d777a2fa9c526789697aa0aeda8c832c7f7253e04075778" gracePeriod=30 Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.310344 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-notification-agent" containerID="cri-o://d476936173fa8b9339e378c66ed81ca4fdf164e17aac3a0cee640e38a116c3dc" gracePeriod=30 Nov 22 07:31:06 crc kubenswrapper[4856]: I1122 07:31:06.310468 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="sg-core" containerID="cri-o://08393bede254000c3ca4c821f71ad0d95b0c2099bf833752bffeaf293edf8a8a" gracePeriod=30 Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.072921 4856 generic.go:334] "Generic (PLEG): container finished" podID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerID="888656c2851622693d777a2fa9c526789697aa0aeda8c832c7f7253e04075778" exitCode=0 Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.073167 4856 generic.go:334] "Generic (PLEG): container finished" podID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerID="08393bede254000c3ca4c821f71ad0d95b0c2099bf833752bffeaf293edf8a8a" exitCode=2 Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.073179 4856 generic.go:334] "Generic (PLEG): container finished" podID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerID="02504c98907f21ce7764d4fd28d98f7f096076bf7cb9878d6a5f71bfa8fcfe37" exitCode=0 Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.073221 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerDied","Data":"888656c2851622693d777a2fa9c526789697aa0aeda8c832c7f7253e04075778"} Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.073266 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerDied","Data":"08393bede254000c3ca4c821f71ad0d95b0c2099bf833752bffeaf293edf8a8a"} Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.073279 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerDied","Data":"02504c98907f21ce7764d4fd28d98f7f096076bf7cb9878d6a5f71bfa8fcfe37"} Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.076178 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" event={"ID":"8c76350d-ce88-42c5-8f7c-68c084a511e2","Type":"ContainerStarted","Data":"926e6fd1571566f17f16d953955d8eb260b1f5bed95ee74d64360f30908b0a98"} Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.077201 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.104752 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" podStartSLOduration=3.104731217 podStartE2EDuration="3.104731217s" podCreationTimestamp="2025-11-22 07:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:31:07.09823509 +0000 UTC m=+1709.511628368" watchObservedRunningTime="2025-11-22 
07:31:07.104731217 +0000 UTC m=+1709.518124475" Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.152264 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.152609 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-api" containerID="cri-o://6e3dd8b924b897acb9b184243b472d1bfded14d6b7301c4d1e6a22fc4e1cb1fa" gracePeriod=30 Nov 22 07:31:07 crc kubenswrapper[4856]: I1122 07:31:07.152618 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-log" containerID="cri-o://1305ade93c45b7db5cb683dda815f78f4b84965b753342ff5aa6010603f877fd" gracePeriod=30 Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.088771 4856 generic.go:334] "Generic (PLEG): container finished" podID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerID="d476936173fa8b9339e378c66ed81ca4fdf164e17aac3a0cee640e38a116c3dc" exitCode=0 Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.088880 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerDied","Data":"d476936173fa8b9339e378c66ed81ca4fdf164e17aac3a0cee640e38a116c3dc"} Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.093212 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerID="1305ade93c45b7db5cb683dda815f78f4b84965b753342ff5aa6010603f877fd" exitCode=143 Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.094139 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8","Type":"ContainerDied","Data":"1305ade93c45b7db5cb683dda815f78f4b84965b753342ff5aa6010603f877fd"} Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.450820 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516366 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-log-httpd\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516435 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-ceilometer-tls-certs\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516489 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-config-data\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516592 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-run-httpd\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516640 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-combined-ca-bundle\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516693 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-scripts\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516719 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-sg-core-conf-yaml\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.516734 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2l8s\" (UniqueName: \"kubernetes.io/projected/8d53cbd2-2659-4dac-a5ea-2d6285d32896-kube-api-access-s2l8s\") pod \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\" (UID: \"8d53cbd2-2659-4dac-a5ea-2d6285d32896\") " Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.517489 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "run-httpd". 
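The "Killing container with a grace period ... gracePeriod=30" entries above, and the exit codes that follow (0 and 143 for containers that stop on SIGTERM, 2 for sg-core, and 137 earlier for a container that had to be SIGKILLed), reflect the usual termination sequence: SIGTERM, a grace period, then SIGKILL. A minimal stdlib sketch of a service that shuts down cleanly inside such a grace period; the 25-second budget and the listen address are illustrative, not taken from these pods:

```go
// graceful_shutdown.go - minimal sketch of handling SIGTERM within a pod's
// termination grace period (gracePeriod=30 in the log). A process that exits
// this way reports exit code 0; one killed by an unhandled SIGTERM shows 143,
// and one SIGKILLed after the grace period shows 137, as seen in the log.
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"} // illustrative listen address

	// Stop accepting new work when SIGTERM arrives.
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
	defer stop()

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	<-ctx.Done() // SIGTERM received

	// Finish in-flight requests well inside the 30s grace period.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```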
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.517816 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.523185 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d53cbd2-2659-4dac-a5ea-2d6285d32896-kube-api-access-s2l8s" (OuterVolumeSpecName: "kube-api-access-s2l8s") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "kube-api-access-s2l8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.538739 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-scripts" (OuterVolumeSpecName: "scripts") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.569218 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.581883 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.618484 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.618538 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.618553 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2l8s\" (UniqueName: \"kubernetes.io/projected/8d53cbd2-2659-4dac-a5ea-2d6285d32896-kube-api-access-s2l8s\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.618567 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.618579 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.618589 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d53cbd2-2659-4dac-a5ea-2d6285d32896-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.640687 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.665089 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-config-data" (OuterVolumeSpecName: "config-data") pod "8d53cbd2-2659-4dac-a5ea-2d6285d32896" (UID: "8d53cbd2-2659-4dac-a5ea-2d6285d32896"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.731295 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:08 crc kubenswrapper[4856]: I1122 07:31:08.731348 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d53cbd2-2659-4dac-a5ea-2d6285d32896-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.106471 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d53cbd2-2659-4dac-a5ea-2d6285d32896","Type":"ContainerDied","Data":"f36a5dd7fcaa357acd6768ce7caa458ceb8eb496f6edf8a9d8123b9d5bf80fcf"} Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.106610 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.107483 4856 scope.go:117] "RemoveContainer" containerID="888656c2851622693d777a2fa9c526789697aa0aeda8c832c7f7253e04075778" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.142670 4856 scope.go:117] "RemoveContainer" containerID="08393bede254000c3ca4c821f71ad0d95b0c2099bf833752bffeaf293edf8a8a" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.147157 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.162367 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.167105 4856 scope.go:117] "RemoveContainer" containerID="d476936173fa8b9339e378c66ed81ca4fdf164e17aac3a0cee640e38a116c3dc" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.169702 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:09 crc kubenswrapper[4856]: E1122 07:31:09.170117 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="sg-core" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170130 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="sg-core" Nov 22 07:31:09 crc kubenswrapper[4856]: E1122 07:31:09.170163 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-central-agent" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170168 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-central-agent" Nov 22 07:31:09 crc kubenswrapper[4856]: E1122 07:31:09.170177 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="proxy-httpd" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170183 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="proxy-httpd" Nov 22 07:31:09 crc kubenswrapper[4856]: E1122 07:31:09.170201 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-notification-agent" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170207 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-notification-agent" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170366 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-notification-agent" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170379 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="proxy-httpd" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170395 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="ceilometer-central-agent" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.170405 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" containerName="sg-core" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.172095 4856 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.177849 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.193088 4856 scope.go:117] "RemoveContainer" containerID="02504c98907f21ce7764d4fd28d98f7f096076bf7cb9878d6a5f71bfa8fcfe37" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.195628 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.195729 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.195943 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.241176 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-run-httpd\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.241261 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-scripts\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.241285 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-config-data\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.241333 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.241359 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.241424 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zdb\" (UniqueName: \"kubernetes.io/projected/11484882-5c9a-4546-8ea5-52e0299e55bb-kube-api-access-f9zdb\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.241451 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-log-httpd\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc 
kubenswrapper[4856]: I1122 07:31:09.241499 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343275 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-scripts\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343338 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-config-data\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343398 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343425 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343499 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zdb\" (UniqueName: \"kubernetes.io/projected/11484882-5c9a-4546-8ea5-52e0299e55bb-kube-api-access-f9zdb\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343559 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-log-httpd\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343581 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.343655 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-run-httpd\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.344467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-log-httpd\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 
07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.344526 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-run-httpd\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.348985 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-scripts\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.350178 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-config-data\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.351778 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.358489 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.362095 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.364885 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zdb\" (UniqueName: \"kubernetes.io/projected/11484882-5c9a-4546-8ea5-52e0299e55bb-kube-api-access-f9zdb\") pod \"ceilometer-0\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.521050 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.715227 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:09 crc kubenswrapper[4856]: I1122 07:31:09.991800 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:09 crc kubenswrapper[4856]: W1122 07:31:09.997860 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11484882_5c9a_4546_8ea5_52e0299e55bb.slice/crio-7b5512852e1ec6d9e9e45f8c9f0d2d0cf1d0d3f9bba16bc6b7629d4ae31aa27f WatchSource:0}: Error finding container 7b5512852e1ec6d9e9e45f8c9f0d2d0cf1d0d3f9bba16bc6b7629d4ae31aa27f: Status 404 returned error can't find the container with id 7b5512852e1ec6d9e9e45f8c9f0d2d0cf1d0d3f9bba16bc6b7629d4ae31aa27f Nov 22 07:31:10 crc kubenswrapper[4856]: I1122 07:31:10.138665 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerStarted","Data":"7b5512852e1ec6d9e9e45f8c9f0d2d0cf1d0d3f9bba16bc6b7629d4ae31aa27f"} Nov 22 07:31:10 crc kubenswrapper[4856]: I1122 07:31:10.383158 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:10 crc kubenswrapper[4856]: I1122 07:31:10.402364 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:10 crc kubenswrapper[4856]: I1122 07:31:10.721733 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d53cbd2-2659-4dac-a5ea-2d6285d32896" path="/var/lib/kubelet/pods/8d53cbd2-2659-4dac-a5ea-2d6285d32896/volumes" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.154042 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerID="6e3dd8b924b897acb9b184243b472d1bfded14d6b7301c4d1e6a22fc4e1cb1fa" exitCode=0 Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.155178 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8","Type":"ContainerDied","Data":"6e3dd8b924b897acb9b184243b472d1bfded14d6b7301c4d1e6a22fc4e1cb1fa"} Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.169357 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.304369 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.384500 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-config-data\") pod \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.384740 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8mbp\" (UniqueName: \"kubernetes.io/projected/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-kube-api-access-w8mbp\") pod \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.384815 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-combined-ca-bundle\") pod \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.384855 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-logs\") pod \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\" (UID: \"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8\") " Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.385620 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-logs" (OuterVolumeSpecName: "logs") pod "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" (UID: "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.391533 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-kube-api-access-w8mbp" (OuterVolumeSpecName: "kube-api-access-w8mbp") pod "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" (UID: "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8"). InnerVolumeSpecName "kube-api-access-w8mbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.424680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-config-data" (OuterVolumeSpecName: "config-data") pod "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" (UID: "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.446018 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" (UID: "8b5414b3-6e41-4caa-b030-ecbae1bf8ac8"). InnerVolumeSpecName "combined-ca-bundle". 
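The nova-api-0 teardown above and the recreation that follows (SyncLoop DELETE and REMOVE, then ADD with a new pod UID and fresh TLS and config volumes) is the kubelet's view of the pod being deleted and re-created by its controller. A hedged client-go sketch of observing the same replacement from outside the node by watching the pod; the kubeconfig path is an assumption for the sketch:

```go
// watch_pod.go - illustrative: watch openstack/nova-api-0 from outside the
// node; the DELETE/ADD churn the kubelet logs here shows up as DELETED
// followed by ADDED events carrying the new pod UID. The kubeconfig path is
// an assumption, not taken from this environment.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	w, err := client.CoreV1().Pods("openstack").Watch(context.Background(), metav1.ListOptions{
		FieldSelector: "metadata.name=nova-api-0",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		if pod, ok := ev.Object.(*corev1.Pod); ok {
			fmt.Printf("%s uid=%s phase=%s\n", ev.Type, pod.UID, pod.Status.Phase)
		}
	}
}
```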
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.486618 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.486643 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8mbp\" (UniqueName: \"kubernetes.io/projected/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-kube-api-access-w8mbp\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.486654 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:11 crc kubenswrapper[4856]: I1122 07:31:11.486664 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.165389 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b5414b3-6e41-4caa-b030-ecbae1bf8ac8","Type":"ContainerDied","Data":"3ef5d523c58cc9076466196d896dcf4e71a0a76b7d6bf314b25457022c127f48"} Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.165421 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.165856 4856 scope.go:117] "RemoveContainer" containerID="6e3dd8b924b897acb9b184243b472d1bfded14d6b7301c4d1e6a22fc4e1cb1fa" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.167483 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerStarted","Data":"171fa6d910ec890fc4e231b90b555eb91c0dd9281736a4f8988948a4510de215"} Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.199752 4856 scope.go:117] "RemoveContainer" containerID="1305ade93c45b7db5cb683dda815f78f4b84965b753342ff5aa6010603f877fd" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.206733 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.215173 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.240155 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:12 crc kubenswrapper[4856]: E1122 07:31:12.240691 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-api" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.240714 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-api" Nov 22 07:31:12 crc kubenswrapper[4856]: E1122 07:31:12.240726 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-log" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.240733 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-log" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.240942 4856 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-api" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.240963 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" containerName="nova-api-log" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.242278 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.244407 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.245006 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.245136 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.261785 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.303450 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-public-tls-certs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.303533 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-config-data\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.303555 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.303629 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a263525-c6c2-4256-8e77-7ecf12df9caf-logs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.303720 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.303924 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pvt2\" (UniqueName: \"kubernetes.io/projected/3a263525-c6c2-4256-8e77-7ecf12df9caf-kube-api-access-5pvt2\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.406506 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-public-tls-certs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.406593 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-config-data\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.406629 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.406732 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a263525-c6c2-4256-8e77-7ecf12df9caf-logs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.406760 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.406840 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pvt2\" (UniqueName: \"kubernetes.io/projected/3a263525-c6c2-4256-8e77-7ecf12df9caf-kube-api-access-5pvt2\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.407183 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a263525-c6c2-4256-8e77-7ecf12df9caf-logs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.411623 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-public-tls-certs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.411659 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.412251 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-config-data\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.415184 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.427648 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pvt2\" (UniqueName: \"kubernetes.io/projected/3a263525-c6c2-4256-8e77-7ecf12df9caf-kube-api-access-5pvt2\") pod \"nova-api-0\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.578400 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:12 crc kubenswrapper[4856]: I1122 07:31:12.722577 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5414b3-6e41-4caa-b030-ecbae1bf8ac8" path="/var/lib/kubelet/pods/8b5414b3-6e41-4caa-b030-ecbae1bf8ac8/volumes" Nov 22 07:31:13 crc kubenswrapper[4856]: I1122 07:31:13.025381 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:13 crc kubenswrapper[4856]: W1122 07:31:13.027552 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a263525_c6c2_4256_8e77_7ecf12df9caf.slice/crio-10b7ef438530f610286d692bc7999e5f9b9fa24d40ad1e0f23cd1221d54f8a9b WatchSource:0}: Error finding container 10b7ef438530f610286d692bc7999e5f9b9fa24d40ad1e0f23cd1221d54f8a9b: Status 404 returned error can't find the container with id 10b7ef438530f610286d692bc7999e5f9b9fa24d40ad1e0f23cd1221d54f8a9b Nov 22 07:31:13 crc kubenswrapper[4856]: I1122 07:31:13.194494 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a263525-c6c2-4256-8e77-7ecf12df9caf","Type":"ContainerStarted","Data":"10b7ef438530f610286d692bc7999e5f9b9fa24d40ad1e0f23cd1221d54f8a9b"} Nov 22 07:31:14 crc kubenswrapper[4856]: I1122 07:31:14.207275 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a263525-c6c2-4256-8e77-7ecf12df9caf","Type":"ContainerStarted","Data":"4e140130f70e83959c9825437a4356abd59518f1bc7f588c31811c6ba07d3a8c"} Nov 22 07:31:14 crc kubenswrapper[4856]: I1122 07:31:14.558956 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:31:14 crc kubenswrapper[4856]: I1122 07:31:14.631474 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-588dc4df7-wm5rv"] Nov 22 07:31:14 crc kubenswrapper[4856]: I1122 07:31:14.631771 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" podUID="369dc315-311b-4701-b4ff-4c0925c06d03" containerName="dnsmasq-dns" containerID="cri-o://cd506a9cce5f375fe71e07b7e5f119bb0d475f8558d939f650f3dc3b13206492" gracePeriod=10 Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.234849 4856 generic.go:334] "Generic (PLEG): container finished" podID="369dc315-311b-4701-b4ff-4c0925c06d03" containerID="cd506a9cce5f375fe71e07b7e5f119bb0d475f8558d939f650f3dc3b13206492" exitCode=0 Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.234948 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" event={"ID":"369dc315-311b-4701-b4ff-4c0925c06d03","Type":"ContainerDied","Data":"cd506a9cce5f375fe71e07b7e5f119bb0d475f8558d939f650f3dc3b13206492"} Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.275029 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"3a263525-c6c2-4256-8e77-7ecf12df9caf","Type":"ContainerStarted","Data":"3b896043efdc8c004a38ab6b7b5f5f48c16d9c632c413f3028ca037ccf425d7c"} Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.308256 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.308231866 podStartE2EDuration="3.308231866s" podCreationTimestamp="2025-11-22 07:31:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:31:15.304721671 +0000 UTC m=+1717.718114929" watchObservedRunningTime="2025-11-22 07:31:15.308231866 +0000 UTC m=+1717.721625134" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.681582 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.783252 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-sb\") pod \"369dc315-311b-4701-b4ff-4c0925c06d03\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.783350 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-config\") pod \"369dc315-311b-4701-b4ff-4c0925c06d03\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.783470 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-swift-storage-0\") pod \"369dc315-311b-4701-b4ff-4c0925c06d03\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.783496 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxbgr\" (UniqueName: \"kubernetes.io/projected/369dc315-311b-4701-b4ff-4c0925c06d03-kube-api-access-vxbgr\") pod \"369dc315-311b-4701-b4ff-4c0925c06d03\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.783583 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-svc\") pod \"369dc315-311b-4701-b4ff-4c0925c06d03\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.783627 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-nb\") pod \"369dc315-311b-4701-b4ff-4c0925c06d03\" (UID: \"369dc315-311b-4701-b4ff-4c0925c06d03\") " Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.800750 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369dc315-311b-4701-b4ff-4c0925c06d03-kube-api-access-vxbgr" (OuterVolumeSpecName: "kube-api-access-vxbgr") pod "369dc315-311b-4701-b4ff-4c0925c06d03" (UID: "369dc315-311b-4701-b4ff-4c0925c06d03"). InnerVolumeSpecName "kube-api-access-vxbgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.840270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "369dc315-311b-4701-b4ff-4c0925c06d03" (UID: "369dc315-311b-4701-b4ff-4c0925c06d03"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.842933 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "369dc315-311b-4701-b4ff-4c0925c06d03" (UID: "369dc315-311b-4701-b4ff-4c0925c06d03"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.847497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "369dc315-311b-4701-b4ff-4c0925c06d03" (UID: "369dc315-311b-4701-b4ff-4c0925c06d03"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.860549 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "369dc315-311b-4701-b4ff-4c0925c06d03" (UID: "369dc315-311b-4701-b4ff-4c0925c06d03"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.872038 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-config" (OuterVolumeSpecName: "config") pod "369dc315-311b-4701-b4ff-4c0925c06d03" (UID: "369dc315-311b-4701-b4ff-4c0925c06d03"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.887214 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.887274 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.887290 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.887301 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.887313 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/369dc315-311b-4701-b4ff-4c0925c06d03-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:15 crc kubenswrapper[4856]: I1122 07:31:15.887326 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxbgr\" (UniqueName: \"kubernetes.io/projected/369dc315-311b-4701-b4ff-4c0925c06d03-kube-api-access-vxbgr\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.206266 4856 scope.go:117] "RemoveContainer" containerID="d1872b43d48a7dd796d09ce684c61535aa2f733c293336f262363d2a33ae0724" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.236206 4856 scope.go:117] "RemoveContainer" containerID="d08b0f39d313ca2bfb10a627ce9f6382f91298f77a6a3de7131c3e98a404d232" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.289343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" event={"ID":"369dc315-311b-4701-b4ff-4c0925c06d03","Type":"ContainerDied","Data":"a7c3453807bdc06296e7874e557924c79679641c804605c7f318382a7c9c9d5e"} Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.289390 4856 scope.go:117] "RemoveContainer" containerID="cd506a9cce5f375fe71e07b7e5f119bb0d475f8558d939f650f3dc3b13206492" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.289531 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-588dc4df7-wm5rv" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.297409 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerStarted","Data":"1d8f0793289746fe0ee9ff1811082e38824e851312af0a978bd6a5ff6a17a83e"} Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.333389 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-588dc4df7-wm5rv"] Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.340243 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-588dc4df7-wm5rv"] Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.476013 4856 scope.go:117] "RemoveContainer" containerID="ab5ac1bf362ac787b5174e82db4e12310e323fb9748e101c85540403217c1862" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.557559 4856 scope.go:117] "RemoveContainer" containerID="655621f6b54f6f59a9017161f1f54b0acfe90ff8102781853dc683c11d05f10e" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.710534 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:31:16 crc kubenswrapper[4856]: E1122 07:31:16.712889 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:31:16 crc kubenswrapper[4856]: I1122 07:31:16.724430 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369dc315-311b-4701-b4ff-4c0925c06d03" path="/var/lib/kubelet/pods/369dc315-311b-4701-b4ff-4c0925c06d03/volumes" Nov 22 07:31:17 crc kubenswrapper[4856]: I1122 07:31:17.282304 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:31:17 crc kubenswrapper[4856]: I1122 07:31:17.311321 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerStarted","Data":"389df6ca82ff75731f5f2d7981db2f380ac156cd95ed559c8e40cc89c52dfffa"} Nov 22 07:31:20 crc kubenswrapper[4856]: I1122 07:31:20.347415 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerStarted","Data":"c3455f6296a6206f802a9d415c009d6bfef7cd1292c7fc6ef802cefd6baf086a"} Nov 22 07:31:20 crc kubenswrapper[4856]: I1122 07:31:20.348006 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:31:20 crc kubenswrapper[4856]: I1122 07:31:20.347862 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="sg-core" containerID="cri-o://389df6ca82ff75731f5f2d7981db2f380ac156cd95ed559c8e40cc89c52dfffa" gracePeriod=30 Nov 22 07:31:20 crc kubenswrapper[4856]: I1122 07:31:20.347675 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="ceilometer-central-agent" 
containerID="cri-o://171fa6d910ec890fc4e231b90b555eb91c0dd9281736a4f8988948a4510de215" gracePeriod=30 Nov 22 07:31:20 crc kubenswrapper[4856]: I1122 07:31:20.347897 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="ceilometer-notification-agent" containerID="cri-o://1d8f0793289746fe0ee9ff1811082e38824e851312af0a978bd6a5ff6a17a83e" gracePeriod=30 Nov 22 07:31:20 crc kubenswrapper[4856]: I1122 07:31:20.347897 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="proxy-httpd" containerID="cri-o://c3455f6296a6206f802a9d415c009d6bfef7cd1292c7fc6ef802cefd6baf086a" gracePeriod=30 Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.363722 4856 generic.go:334] "Generic (PLEG): container finished" podID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerID="c3455f6296a6206f802a9d415c009d6bfef7cd1292c7fc6ef802cefd6baf086a" exitCode=0 Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.364017 4856 generic.go:334] "Generic (PLEG): container finished" podID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerID="389df6ca82ff75731f5f2d7981db2f380ac156cd95ed559c8e40cc89c52dfffa" exitCode=2 Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.364032 4856 generic.go:334] "Generic (PLEG): container finished" podID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerID="1d8f0793289746fe0ee9ff1811082e38824e851312af0a978bd6a5ff6a17a83e" exitCode=0 Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.364042 4856 generic.go:334] "Generic (PLEG): container finished" podID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerID="171fa6d910ec890fc4e231b90b555eb91c0dd9281736a4f8988948a4510de215" exitCode=0 Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.363798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerDied","Data":"c3455f6296a6206f802a9d415c009d6bfef7cd1292c7fc6ef802cefd6baf086a"} Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.364083 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerDied","Data":"389df6ca82ff75731f5f2d7981db2f380ac156cd95ed559c8e40cc89c52dfffa"} Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.364101 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerDied","Data":"1d8f0793289746fe0ee9ff1811082e38824e851312af0a978bd6a5ff6a17a83e"} Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.364113 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerDied","Data":"171fa6d910ec890fc4e231b90b555eb91c0dd9281736a4f8988948a4510de215"} Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.701822 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.810656 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-combined-ca-bundle\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.811054 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-sg-core-conf-yaml\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.811234 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-scripts\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.811389 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-ceilometer-tls-certs\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.811578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-run-httpd\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.812083 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-config-data\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.812497 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-log-httpd\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.811999 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.812857 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.813899 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9zdb\" (UniqueName: \"kubernetes.io/projected/11484882-5c9a-4546-8ea5-52e0299e55bb-kube-api-access-f9zdb\") pod \"11484882-5c9a-4546-8ea5-52e0299e55bb\" (UID: \"11484882-5c9a-4546-8ea5-52e0299e55bb\") " Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.815912 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.816828 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11484882-5c9a-4546-8ea5-52e0299e55bb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.817670 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-scripts" (OuterVolumeSpecName: "scripts") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.817849 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11484882-5c9a-4546-8ea5-52e0299e55bb-kube-api-access-f9zdb" (OuterVolumeSpecName: "kube-api-access-f9zdb") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "kube-api-access-f9zdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.839118 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.863822 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.886452 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.908368 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-config-data" (OuterVolumeSpecName: "config-data") pod "11484882-5c9a-4546-8ea5-52e0299e55bb" (UID: "11484882-5c9a-4546-8ea5-52e0299e55bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.919336 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.919371 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.919384 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.919394 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.919403 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11484882-5c9a-4546-8ea5-52e0299e55bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:21 crc kubenswrapper[4856]: I1122 07:31:21.919413 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9zdb\" (UniqueName: \"kubernetes.io/projected/11484882-5c9a-4546-8ea5-52e0299e55bb-kube-api-access-f9zdb\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.378785 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11484882-5c9a-4546-8ea5-52e0299e55bb","Type":"ContainerDied","Data":"7b5512852e1ec6d9e9e45f8c9f0d2d0cf1d0d3f9bba16bc6b7629d4ae31aa27f"} Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.378940 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.379110 4856 scope.go:117] "RemoveContainer" containerID="c3455f6296a6206f802a9d415c009d6bfef7cd1292c7fc6ef802cefd6baf086a" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.401827 4856 scope.go:117] "RemoveContainer" containerID="389df6ca82ff75731f5f2d7981db2f380ac156cd95ed559c8e40cc89c52dfffa" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.430488 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.436376 4856 scope.go:117] "RemoveContainer" containerID="1d8f0793289746fe0ee9ff1811082e38824e851312af0a978bd6a5ff6a17a83e" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.439402 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.460264 4856 scope.go:117] "RemoveContainer" containerID="171fa6d910ec890fc4e231b90b555eb91c0dd9281736a4f8988948a4510de215" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.468340 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:22 crc kubenswrapper[4856]: E1122 07:31:22.468852 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="ceilometer-central-agent" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.468881 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="ceilometer-central-agent" Nov 22 07:31:22 crc kubenswrapper[4856]: E1122 07:31:22.468907 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="proxy-httpd" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.468914 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="proxy-httpd" Nov 22 07:31:22 crc kubenswrapper[4856]: E1122 07:31:22.468928 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369dc315-311b-4701-b4ff-4c0925c06d03" containerName="dnsmasq-dns" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.468934 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="369dc315-311b-4701-b4ff-4c0925c06d03" containerName="dnsmasq-dns" Nov 22 07:31:22 crc kubenswrapper[4856]: E1122 07:31:22.468948 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369dc315-311b-4701-b4ff-4c0925c06d03" containerName="init" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.468954 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="369dc315-311b-4701-b4ff-4c0925c06d03" containerName="init" Nov 22 07:31:22 crc kubenswrapper[4856]: E1122 07:31:22.468966 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="sg-core" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.468973 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="sg-core" Nov 22 07:31:22 crc kubenswrapper[4856]: E1122 07:31:22.468984 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="ceilometer-notification-agent" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.468990 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" 
containerName="ceilometer-notification-agent" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.469151 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="ceilometer-notification-agent" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.469173 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="sg-core" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.469183 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="ceilometer-central-agent" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.469193 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" containerName="proxy-httpd" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.469214 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="369dc315-311b-4701-b4ff-4c0925c06d03" containerName="dnsmasq-dns" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.470904 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.476701 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.483747 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.484448 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.484854 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.542108 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-scripts\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.542431 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft2fw\" (UniqueName: \"kubernetes.io/projected/e0f8403e-a06a-4804-b60a-98974506f547-kube-api-access-ft2fw\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.542564 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-config-data\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.542841 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.542974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.543081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-run-httpd\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.543217 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.543591 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-log-httpd\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.579647 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.579758 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645489 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645609 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645637 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-run-httpd\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645737 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-log-httpd\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645779 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-scripts\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645837 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft2fw\" (UniqueName: \"kubernetes.io/projected/e0f8403e-a06a-4804-b60a-98974506f547-kube-api-access-ft2fw\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.645874 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-config-data\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.646598 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-log-httpd\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.647010 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-run-httpd\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.650392 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.650927 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-scripts\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.651755 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.661867 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-config-data\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.664577 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.665968 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft2fw\" (UniqueName: 
\"kubernetes.io/projected/e0f8403e-a06a-4804-b60a-98974506f547-kube-api-access-ft2fw\") pod \"ceilometer-0\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " pod="openstack/ceilometer-0" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.720399 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11484882-5c9a-4546-8ea5-52e0299e55bb" path="/var/lib/kubelet/pods/11484882-5c9a-4546-8ea5-52e0299e55bb/volumes" Nov 22 07:31:22 crc kubenswrapper[4856]: I1122 07:31:22.797115 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:31:23 crc kubenswrapper[4856]: I1122 07:31:23.044021 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:31:23 crc kubenswrapper[4856]: I1122 07:31:23.390269 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerStarted","Data":"6be3023150b988cc76c05c3bd45b087a3c33a10aafda575fc6de920562f9d152"} Nov 22 07:31:23 crc kubenswrapper[4856]: I1122 07:31:23.591746 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:31:23 crc kubenswrapper[4856]: I1122 07:31:23.591766 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:31:24 crc kubenswrapper[4856]: I1122 07:31:24.434169 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerStarted","Data":"24259cf1c1f38f1bc7f64997b64b9ed69fb4bf62d123b79b4fadefd0f143056d"} Nov 22 07:31:26 crc kubenswrapper[4856]: I1122 07:31:26.451529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerStarted","Data":"02a270d659156bdef916a33cbab50d2c8c0cc0527187e2d9fcd2dc12495e6671"} Nov 22 07:31:28 crc kubenswrapper[4856]: I1122 07:31:28.475084 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerStarted","Data":"b22af23b8eca911c39bf860e938113315fcb9f3dd60e8b97761359b25855b4a1"} Nov 22 07:31:29 crc kubenswrapper[4856]: I1122 07:31:29.710078 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:31:29 crc kubenswrapper[4856]: E1122 07:31:29.710752 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:31:32 crc kubenswrapper[4856]: I1122 07:31:32.591128 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:31:32 crc kubenswrapper[4856]: 
I1122 07:31:32.592046 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:31:32 crc kubenswrapper[4856]: I1122 07:31:32.592590 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:31:32 crc kubenswrapper[4856]: I1122 07:31:32.597387 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:31:33 crc kubenswrapper[4856]: I1122 07:31:33.537921 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerStarted","Data":"b68c3e9d5fec381205cff7840dff84ed802d1d3dd4294ad59eed929c11d88ac0"} Nov 22 07:31:33 crc kubenswrapper[4856]: I1122 07:31:33.538305 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:31:33 crc kubenswrapper[4856]: I1122 07:31:33.545802 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:31:33 crc kubenswrapper[4856]: I1122 07:31:33.566224 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9352135860000002 podStartE2EDuration="11.56620379s" podCreationTimestamp="2025-11-22 07:31:22 +0000 UTC" firstStartedPulling="2025-11-22 07:31:23.052889819 +0000 UTC m=+1725.466283077" lastFinishedPulling="2025-11-22 07:31:32.683880023 +0000 UTC m=+1735.097273281" observedRunningTime="2025-11-22 07:31:33.559624351 +0000 UTC m=+1735.973017659" watchObservedRunningTime="2025-11-22 07:31:33.56620379 +0000 UTC m=+1735.979597048" Nov 22 07:31:34 crc kubenswrapper[4856]: I1122 07:31:34.546031 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 07:31:36 crc kubenswrapper[4856]: I1122 07:31:36.576460 4856 generic.go:334] "Generic (PLEG): container finished" podID="e782df2b-d7a8-4319-aead-d5165a61314a" containerID="dfb96d957f6cb86c56972d43dc87c8482e105284bd355469527ebc982327a614" exitCode=0 Nov 22 07:31:36 crc kubenswrapper[4856]: I1122 07:31:36.577001 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" event={"ID":"e782df2b-d7a8-4319-aead-d5165a61314a","Type":"ContainerDied","Data":"dfb96d957f6cb86c56972d43dc87c8482e105284bd355469527ebc982327a614"} Nov 22 07:31:37 crc kubenswrapper[4856]: I1122 07:31:37.941343 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.048695 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66b6x\" (UniqueName: \"kubernetes.io/projected/e782df2b-d7a8-4319-aead-d5165a61314a-kube-api-access-66b6x\") pod \"e782df2b-d7a8-4319-aead-d5165a61314a\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.048828 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle\") pod \"e782df2b-d7a8-4319-aead-d5165a61314a\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.048911 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-scripts\") pod \"e782df2b-d7a8-4319-aead-d5165a61314a\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.048973 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-config-data\") pod \"e782df2b-d7a8-4319-aead-d5165a61314a\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.054084 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e782df2b-d7a8-4319-aead-d5165a61314a-kube-api-access-66b6x" (OuterVolumeSpecName: "kube-api-access-66b6x") pod "e782df2b-d7a8-4319-aead-d5165a61314a" (UID: "e782df2b-d7a8-4319-aead-d5165a61314a"). InnerVolumeSpecName "kube-api-access-66b6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.054204 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-scripts" (OuterVolumeSpecName: "scripts") pod "e782df2b-d7a8-4319-aead-d5165a61314a" (UID: "e782df2b-d7a8-4319-aead-d5165a61314a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:38 crc kubenswrapper[4856]: E1122 07:31:38.073071 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle podName:e782df2b-d7a8-4319-aead-d5165a61314a nodeName:}" failed. No retries permitted until 2025-11-22 07:31:38.573042713 +0000 UTC m=+1740.986435971 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle") pod "e782df2b-d7a8-4319-aead-d5165a61314a" (UID: "e782df2b-d7a8-4319-aead-d5165a61314a") : error deleting /var/lib/kubelet/pods/e782df2b-d7a8-4319-aead-d5165a61314a/volume-subpaths: remove /var/lib/kubelet/pods/e782df2b-d7a8-4319-aead-d5165a61314a/volume-subpaths: no such file or directory Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.075810 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-config-data" (OuterVolumeSpecName: "config-data") pod "e782df2b-d7a8-4319-aead-d5165a61314a" (UID: "e782df2b-d7a8-4319-aead-d5165a61314a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.151422 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66b6x\" (UniqueName: \"kubernetes.io/projected/e782df2b-d7a8-4319-aead-d5165a61314a-kube-api-access-66b6x\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.151457 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.151469 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.596995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" event={"ID":"e782df2b-d7a8-4319-aead-d5165a61314a","Type":"ContainerDied","Data":"fbb778ebcd5865153b3e48b7da54e6fd79bfa2c9cd0a6b42ceb92f4420aa35f1"} Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.597064 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbb778ebcd5865153b3e48b7da54e6fd79bfa2c9cd0a6b42ceb92f4420aa35f1" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.597193 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-gd2pc" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.659495 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle\") pod \"e782df2b-d7a8-4319-aead-d5165a61314a\" (UID: \"e782df2b-d7a8-4319-aead-d5165a61314a\") " Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.666792 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e782df2b-d7a8-4319-aead-d5165a61314a" (UID: "e782df2b-d7a8-4319-aead-d5165a61314a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.673990 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:31:38 crc kubenswrapper[4856]: E1122 07:31:38.674348 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e782df2b-d7a8-4319-aead-d5165a61314a" containerName="nova-cell1-conductor-db-sync" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.674364 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e782df2b-d7a8-4319-aead-d5165a61314a" containerName="nova-cell1-conductor-db-sync" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.674568 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e782df2b-d7a8-4319-aead-d5165a61314a" containerName="nova-cell1-conductor-db-sync" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.675132 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.705699 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.761555 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.761770 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgdmw\" (UniqueName: \"kubernetes.io/projected/2b88f55c-12d5-4cba-a155-aa00c19c94f4-kube-api-access-mgdmw\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.762036 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.762190 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e782df2b-d7a8-4319-aead-d5165a61314a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.864209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.864270 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgdmw\" (UniqueName: \"kubernetes.io/projected/2b88f55c-12d5-4cba-a155-aa00c19c94f4-kube-api-access-mgdmw\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.864349 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.867571 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.868303 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " 
pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:38 crc kubenswrapper[4856]: I1122 07:31:38.881041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgdmw\" (UniqueName: \"kubernetes.io/projected/2b88f55c-12d5-4cba-a155-aa00c19c94f4-kube-api-access-mgdmw\") pod \"nova-cell1-conductor-0\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:39 crc kubenswrapper[4856]: I1122 07:31:39.031022 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:39 crc kubenswrapper[4856]: W1122 07:31:39.445877 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b88f55c_12d5_4cba_a155_aa00c19c94f4.slice/crio-30e27716525ee234195b0b17bda99ec7ef8a3f241fa24c80ac5dc2e3afb9fe20 WatchSource:0}: Error finding container 30e27716525ee234195b0b17bda99ec7ef8a3f241fa24c80ac5dc2e3afb9fe20: Status 404 returned error can't find the container with id 30e27716525ee234195b0b17bda99ec7ef8a3f241fa24c80ac5dc2e3afb9fe20 Nov 22 07:31:39 crc kubenswrapper[4856]: I1122 07:31:39.451784 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:31:39 crc kubenswrapper[4856]: I1122 07:31:39.608893 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2b88f55c-12d5-4cba-a155-aa00c19c94f4","Type":"ContainerStarted","Data":"30e27716525ee234195b0b17bda99ec7ef8a3f241fa24c80ac5dc2e3afb9fe20"} Nov 22 07:31:39 crc kubenswrapper[4856]: E1122 07:31:39.882443 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode782df2b_d7a8_4319_aead_d5165a61314a.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:31:40 crc kubenswrapper[4856]: I1122 07:31:40.620021 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2b88f55c-12d5-4cba-a155-aa00c19c94f4","Type":"ContainerStarted","Data":"889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb"} Nov 22 07:31:40 crc kubenswrapper[4856]: I1122 07:31:40.620280 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:40 crc kubenswrapper[4856]: I1122 07:31:40.646119 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.646101143 podStartE2EDuration="2.646101143s" podCreationTimestamp="2025-11-22 07:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:31:40.641129927 +0000 UTC m=+1743.054523185" watchObservedRunningTime="2025-11-22 07:31:40.646101143 +0000 UTC m=+1743.059494401" Nov 22 07:31:41 crc kubenswrapper[4856]: I1122 07:31:41.710591 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:31:41 crc kubenswrapper[4856]: E1122 07:31:41.711478 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:31:44 crc kubenswrapper[4856]: I1122 07:31:44.059405 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.100608 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-qmflx"] Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.102097 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.105235 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.105482 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.137158 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qmflx"] Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.178631 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.178733 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-config-data\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.178778 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-scripts\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.178863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwrmf\" (UniqueName: \"kubernetes.io/projected/bdcf6fb4-5003-482a-88eb-995e4626c8c8-kube-api-access-xwrmf\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.281161 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.281220 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-config-data\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " 
pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.281261 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-scripts\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.281329 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwrmf\" (UniqueName: \"kubernetes.io/projected/bdcf6fb4-5003-482a-88eb-995e4626c8c8-kube-api-access-xwrmf\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.288254 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-config-data\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.289003 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.302559 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwrmf\" (UniqueName: \"kubernetes.io/projected/bdcf6fb4-5003-482a-88eb-995e4626c8c8-kube-api-access-xwrmf\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.304213 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-scripts\") pod \"nova-cell1-cell-mapping-qmflx\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.425221 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:45 crc kubenswrapper[4856]: I1122 07:31:45.907646 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-qmflx"] Nov 22 07:31:46 crc kubenswrapper[4856]: I1122 07:31:46.680632 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qmflx" event={"ID":"bdcf6fb4-5003-482a-88eb-995e4626c8c8","Type":"ContainerStarted","Data":"ee12aaf0afa1e8898092ae79bf6a5ca333cd078f19c65b37949306518a4fa5b2"} Nov 22 07:31:46 crc kubenswrapper[4856]: I1122 07:31:46.681937 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qmflx" event={"ID":"bdcf6fb4-5003-482a-88eb-995e4626c8c8","Type":"ContainerStarted","Data":"6d0ede9cb4b6276f9839885f743b00bcfb56bc38e4a312bfbf4a55ea8c2eab61"} Nov 22 07:31:46 crc kubenswrapper[4856]: I1122 07:31:46.702756 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-qmflx" podStartSLOduration=1.702705366 podStartE2EDuration="1.702705366s" podCreationTimestamp="2025-11-22 07:31:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:31:46.697088244 +0000 UTC m=+1749.110481522" watchObservedRunningTime="2025-11-22 07:31:46.702705366 +0000 UTC m=+1749.116098624" Nov 22 07:31:50 crc kubenswrapper[4856]: E1122 07:31:50.151952 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode782df2b_d7a8_4319_aead_d5165a61314a.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:31:51 crc kubenswrapper[4856]: I1122 07:31:51.728185 4856 generic.go:334] "Generic (PLEG): container finished" podID="bdcf6fb4-5003-482a-88eb-995e4626c8c8" containerID="ee12aaf0afa1e8898092ae79bf6a5ca333cd078f19c65b37949306518a4fa5b2" exitCode=0 Nov 22 07:31:51 crc kubenswrapper[4856]: I1122 07:31:51.728308 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qmflx" event={"ID":"bdcf6fb4-5003-482a-88eb-995e4626c8c8","Type":"ContainerDied","Data":"ee12aaf0afa1e8898092ae79bf6a5ca333cd078f19c65b37949306518a4fa5b2"} Nov 22 07:31:52 crc kubenswrapper[4856]: I1122 07:31:52.805436 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.073022 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.133256 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-combined-ca-bundle\") pod \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.133406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwrmf\" (UniqueName: \"kubernetes.io/projected/bdcf6fb4-5003-482a-88eb-995e4626c8c8-kube-api-access-xwrmf\") pod \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.133446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-config-data\") pod \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.133603 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-scripts\") pod \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\" (UID: \"bdcf6fb4-5003-482a-88eb-995e4626c8c8\") " Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.139080 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-scripts" (OuterVolumeSpecName: "scripts") pod "bdcf6fb4-5003-482a-88eb-995e4626c8c8" (UID: "bdcf6fb4-5003-482a-88eb-995e4626c8c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.139539 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdcf6fb4-5003-482a-88eb-995e4626c8c8-kube-api-access-xwrmf" (OuterVolumeSpecName: "kube-api-access-xwrmf") pod "bdcf6fb4-5003-482a-88eb-995e4626c8c8" (UID: "bdcf6fb4-5003-482a-88eb-995e4626c8c8"). InnerVolumeSpecName "kube-api-access-xwrmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.166649 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdcf6fb4-5003-482a-88eb-995e4626c8c8" (UID: "bdcf6fb4-5003-482a-88eb-995e4626c8c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.187865 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-config-data" (OuterVolumeSpecName: "config-data") pod "bdcf6fb4-5003-482a-88eb-995e4626c8c8" (UID: "bdcf6fb4-5003-482a-88eb-995e4626c8c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.235993 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwrmf\" (UniqueName: \"kubernetes.io/projected/bdcf6fb4-5003-482a-88eb-995e4626c8c8-kube-api-access-xwrmf\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.236315 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.236327 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.236338 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdcf6fb4-5003-482a-88eb-995e4626c8c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.747960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-qmflx" event={"ID":"bdcf6fb4-5003-482a-88eb-995e4626c8c8","Type":"ContainerDied","Data":"6d0ede9cb4b6276f9839885f743b00bcfb56bc38e4a312bfbf4a55ea8c2eab61"} Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.747999 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0ede9cb4b6276f9839885f743b00bcfb56bc38e4a312bfbf4a55ea8c2eab61" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.748026 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-qmflx" Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.923627 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.923880 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-log" containerID="cri-o://4e140130f70e83959c9825437a4356abd59518f1bc7f588c31811c6ba07d3a8c" gracePeriod=30 Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.924023 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-api" containerID="cri-o://3b896043efdc8c004a38ab6b7b5f5f48c16d9c632c413f3028ca037ccf425d7c" gracePeriod=30 Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.957833 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.958180 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fa204458-2052-48a7-9b65-10fdfb68e792" containerName="nova-scheduler-scheduler" containerID="cri-o://fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d" gracePeriod=30 Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.968849 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.969090 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" 
containerName="nova-metadata-log" containerID="cri-o://bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba" gracePeriod=30 Nov 22 07:31:53 crc kubenswrapper[4856]: I1122 07:31:53.969196 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-metadata" containerID="cri-o://df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd" gracePeriod=30 Nov 22 07:31:54 crc kubenswrapper[4856]: I1122 07:31:54.756501 4856 generic.go:334] "Generic (PLEG): container finished" podID="02dac877-8061-402a-b0bc-30f86a9305d6" containerID="bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba" exitCode=143 Nov 22 07:31:54 crc kubenswrapper[4856]: I1122 07:31:54.756589 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02dac877-8061-402a-b0bc-30f86a9305d6","Type":"ContainerDied","Data":"bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba"} Nov 22 07:31:54 crc kubenswrapper[4856]: I1122 07:31:54.758033 4856 generic.go:334] "Generic (PLEG): container finished" podID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerID="4e140130f70e83959c9825437a4356abd59518f1bc7f588c31811c6ba07d3a8c" exitCode=143 Nov 22 07:31:54 crc kubenswrapper[4856]: I1122 07:31:54.758061 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a263525-c6c2-4256-8e77-7ecf12df9caf","Type":"ContainerDied","Data":"4e140130f70e83959c9825437a4356abd59518f1bc7f588c31811c6ba07d3a8c"} Nov 22 07:31:55 crc kubenswrapper[4856]: I1122 07:31:55.709919 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:31:55 crc kubenswrapper[4856]: E1122 07:31:55.710205 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.221353 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": dial tcp 10.217.0.198:8775: connect: connection refused" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.221389 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": dial tcp 10.217.0.198:8775: connect: connection refused" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.713238 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.788720 4856 generic.go:334] "Generic (PLEG): container finished" podID="02dac877-8061-402a-b0bc-30f86a9305d6" containerID="df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd" exitCode=0 Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.788814 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.788835 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02dac877-8061-402a-b0bc-30f86a9305d6","Type":"ContainerDied","Data":"df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd"} Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.789283 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"02dac877-8061-402a-b0bc-30f86a9305d6","Type":"ContainerDied","Data":"15b931d66e0d84d5f7a5bf87595d5333a49653cc8371a274c7aaae206ba0031c"} Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.789346 4856 scope.go:117] "RemoveContainer" containerID="df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.798650 4856 generic.go:334] "Generic (PLEG): container finished" podID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerID="3b896043efdc8c004a38ab6b7b5f5f48c16d9c632c413f3028ca037ccf425d7c" exitCode=0 Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.798696 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a263525-c6c2-4256-8e77-7ecf12df9caf","Type":"ContainerDied","Data":"3b896043efdc8c004a38ab6b7b5f5f48c16d9c632c413f3028ca037ccf425d7c"} Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.798721 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3a263525-c6c2-4256-8e77-7ecf12df9caf","Type":"ContainerDied","Data":"10b7ef438530f610286d692bc7999e5f9b9fa24d40ad1e0f23cd1221d54f8a9b"} Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.798733 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10b7ef438530f610286d692bc7999e5f9b9fa24d40ad1e0f23cd1221d54f8a9b" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.799142 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.811298 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-combined-ca-bundle\") pod \"02dac877-8061-402a-b0bc-30f86a9305d6\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.811426 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dac877-8061-402a-b0bc-30f86a9305d6-logs\") pod \"02dac877-8061-402a-b0bc-30f86a9305d6\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.811500 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6xbq\" (UniqueName: \"kubernetes.io/projected/02dac877-8061-402a-b0bc-30f86a9305d6-kube-api-access-r6xbq\") pod \"02dac877-8061-402a-b0bc-30f86a9305d6\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.811585 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-config-data\") pod \"02dac877-8061-402a-b0bc-30f86a9305d6\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.811648 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-nova-metadata-tls-certs\") pod \"02dac877-8061-402a-b0bc-30f86a9305d6\" (UID: \"02dac877-8061-402a-b0bc-30f86a9305d6\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.812741 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02dac877-8061-402a-b0bc-30f86a9305d6-logs" (OuterVolumeSpecName: "logs") pod "02dac877-8061-402a-b0bc-30f86a9305d6" (UID: "02dac877-8061-402a-b0bc-30f86a9305d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.817758 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02dac877-8061-402a-b0bc-30f86a9305d6-kube-api-access-r6xbq" (OuterVolumeSpecName: "kube-api-access-r6xbq") pod "02dac877-8061-402a-b0bc-30f86a9305d6" (UID: "02dac877-8061-402a-b0bc-30f86a9305d6"). InnerVolumeSpecName "kube-api-access-r6xbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.839618 4856 scope.go:117] "RemoveContainer" containerID="bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.848386 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02dac877-8061-402a-b0bc-30f86a9305d6" (UID: "02dac877-8061-402a-b0bc-30f86a9305d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.851534 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-config-data" (OuterVolumeSpecName: "config-data") pod "02dac877-8061-402a-b0bc-30f86a9305d6" (UID: "02dac877-8061-402a-b0bc-30f86a9305d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.864697 4856 scope.go:117] "RemoveContainer" containerID="df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd" Nov 22 07:31:57 crc kubenswrapper[4856]: E1122 07:31:57.868657 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd\": container with ID starting with df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd not found: ID does not exist" containerID="df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.868707 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd"} err="failed to get container status \"df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd\": rpc error: code = NotFound desc = could not find container \"df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd\": container with ID starting with df58c745712dbcedd311578fd3181594596212dea0770672bcd0884a14ab9ebd not found: ID does not exist" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.868739 4856 scope.go:117] "RemoveContainer" containerID="bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba" Nov 22 07:31:57 crc kubenswrapper[4856]: E1122 07:31:57.869271 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba\": container with ID starting with bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba not found: ID does not exist" containerID="bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.869320 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba"} err="failed to get container status \"bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba\": rpc error: code = NotFound desc = could not find container \"bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba\": container with ID starting with bb1db2ba156b51b1119b4f4bfe5136e3b64ef6280be201e4ce1427a7ccd5d2ba not found: ID does not exist" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.888434 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "02dac877-8061-402a-b0bc-30f86a9305d6" (UID: "02dac877-8061-402a-b0bc-30f86a9305d6"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.913739 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-combined-ca-bundle\") pod \"3a263525-c6c2-4256-8e77-7ecf12df9caf\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.913791 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a263525-c6c2-4256-8e77-7ecf12df9caf-logs\") pod \"3a263525-c6c2-4256-8e77-7ecf12df9caf\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.913969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-public-tls-certs\") pod \"3a263525-c6c2-4256-8e77-7ecf12df9caf\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914037 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-internal-tls-certs\") pod \"3a263525-c6c2-4256-8e77-7ecf12df9caf\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914064 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pvt2\" (UniqueName: \"kubernetes.io/projected/3a263525-c6c2-4256-8e77-7ecf12df9caf-kube-api-access-5pvt2\") pod \"3a263525-c6c2-4256-8e77-7ecf12df9caf\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914132 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-config-data\") pod \"3a263525-c6c2-4256-8e77-7ecf12df9caf\" (UID: \"3a263525-c6c2-4256-8e77-7ecf12df9caf\") " Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914468 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914485 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914494 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dac877-8061-402a-b0bc-30f86a9305d6-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914508 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6xbq\" (UniqueName: \"kubernetes.io/projected/02dac877-8061-402a-b0bc-30f86a9305d6-kube-api-access-r6xbq\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.914540 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dac877-8061-402a-b0bc-30f86a9305d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:57 crc 
kubenswrapper[4856]: I1122 07:31:57.916501 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a263525-c6c2-4256-8e77-7ecf12df9caf-logs" (OuterVolumeSpecName: "logs") pod "3a263525-c6c2-4256-8e77-7ecf12df9caf" (UID: "3a263525-c6c2-4256-8e77-7ecf12df9caf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.920430 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a263525-c6c2-4256-8e77-7ecf12df9caf-kube-api-access-5pvt2" (OuterVolumeSpecName: "kube-api-access-5pvt2") pod "3a263525-c6c2-4256-8e77-7ecf12df9caf" (UID: "3a263525-c6c2-4256-8e77-7ecf12df9caf"). InnerVolumeSpecName "kube-api-access-5pvt2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.941956 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-config-data" (OuterVolumeSpecName: "config-data") pod "3a263525-c6c2-4256-8e77-7ecf12df9caf" (UID: "3a263525-c6c2-4256-8e77-7ecf12df9caf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.943749 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a263525-c6c2-4256-8e77-7ecf12df9caf" (UID: "3a263525-c6c2-4256-8e77-7ecf12df9caf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.962660 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3a263525-c6c2-4256-8e77-7ecf12df9caf" (UID: "3a263525-c6c2-4256-8e77-7ecf12df9caf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:57 crc kubenswrapper[4856]: I1122 07:31:57.974185 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3a263525-c6c2-4256-8e77-7ecf12df9caf" (UID: "3a263525-c6c2-4256-8e77-7ecf12df9caf"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.016216 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.016274 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.016289 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pvt2\" (UniqueName: \"kubernetes.io/projected/3a263525-c6c2-4256-8e77-7ecf12df9caf-kube-api-access-5pvt2\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.016306 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.016319 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a263525-c6c2-4256-8e77-7ecf12df9caf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.016332 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a263525-c6c2-4256-8e77-7ecf12df9caf-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.120716 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.128112 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146101 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.146566 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-log" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146585 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-log" Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.146607 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-metadata" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146615 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-metadata" Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.146630 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-log" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146638 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-log" Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.146663 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdcf6fb4-5003-482a-88eb-995e4626c8c8" containerName="nova-manage" Nov 22 07:31:58 crc kubenswrapper[4856]: 
I1122 07:31:58.146670 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdcf6fb4-5003-482a-88eb-995e4626c8c8" containerName="nova-manage" Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.146688 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-api" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146695 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-api" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146922 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-log" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146952 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-log" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146967 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdcf6fb4-5003-482a-88eb-995e4626c8c8" containerName="nova-manage" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146984 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" containerName="nova-api-api" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.146994 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" containerName="nova-metadata-metadata" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.149726 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.153328 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.153565 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.164814 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.325148 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d is running failed: container process not found" containerID="fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.325705 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d is running failed: container process not found" containerID="fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.326152 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d is running failed: container process not found" containerID="fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:31:58 crc kubenswrapper[4856]: E1122 07:31:58.326189 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fa204458-2052-48a7-9b65-10fdfb68e792" containerName="nova-scheduler-scheduler" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.326841 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpsw\" (UniqueName: \"kubernetes.io/projected/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-kube-api-access-kfpsw\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.327072 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.327171 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-logs\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.327234 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.327406 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.429710 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.430132 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpsw\" (UniqueName: \"kubernetes.io/projected/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-kube-api-access-kfpsw\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.430282 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc 
kubenswrapper[4856]: I1122 07:31:58.430326 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-logs\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.430386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.694949 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.696495 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.700833 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-logs\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.707422 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.725502 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02dac877-8061-402a-b0bc-30f86a9305d6" path="/var/lib/kubelet/pods/02dac877-8061-402a-b0bc-30f86a9305d6/volumes" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.725559 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpsw\" (UniqueName: \"kubernetes.io/projected/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-kube-api-access-kfpsw\") pod \"nova-metadata-0\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.772717 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.827317 4856 generic.go:334] "Generic (PLEG): container finished" podID="fa204458-2052-48a7-9b65-10fdfb68e792" containerID="fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d" exitCode=0 Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.827398 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa204458-2052-48a7-9b65-10fdfb68e792","Type":"ContainerDied","Data":"fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d"} Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.827421 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.909626 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.931902 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.946978 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.948923 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.956150 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.956161 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.956424 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 07:31:58 crc kubenswrapper[4856]: I1122 07:31:58.959424 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.044769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.045087 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-public-tls-certs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.045123 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.045151 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k87gc\" (UniqueName: \"kubernetes.io/projected/b1ccf431-f692-459f-b249-66bd9747d09c-kube-api-access-k87gc\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 
07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.045299 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.045395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ccf431-f692-459f-b249-66bd9747d09c-logs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.147395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ccf431-f692-459f-b249-66bd9747d09c-logs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.147586 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.147625 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-public-tls-certs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.147654 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.147681 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k87gc\" (UniqueName: \"kubernetes.io/projected/b1ccf431-f692-459f-b249-66bd9747d09c-kube-api-access-k87gc\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.147752 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.149397 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ccf431-f692-459f-b249-66bd9747d09c-logs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.154411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-public-tls-certs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 
07:31:59.154494 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.155588 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.159249 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.169841 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k87gc\" (UniqueName: \"kubernetes.io/projected/b1ccf431-f692-459f-b249-66bd9747d09c-kube-api-access-k87gc\") pod \"nova-api-0\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.251379 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.347960 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.546590 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.658540 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-config-data\") pod \"fa204458-2052-48a7-9b65-10fdfb68e792\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.660106 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-combined-ca-bundle\") pod \"fa204458-2052-48a7-9b65-10fdfb68e792\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.660179 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pf88\" (UniqueName: \"kubernetes.io/projected/fa204458-2052-48a7-9b65-10fdfb68e792-kube-api-access-7pf88\") pod \"fa204458-2052-48a7-9b65-10fdfb68e792\" (UID: \"fa204458-2052-48a7-9b65-10fdfb68e792\") " Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.669625 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa204458-2052-48a7-9b65-10fdfb68e792-kube-api-access-7pf88" (OuterVolumeSpecName: "kube-api-access-7pf88") pod "fa204458-2052-48a7-9b65-10fdfb68e792" (UID: "fa204458-2052-48a7-9b65-10fdfb68e792"). InnerVolumeSpecName "kube-api-access-7pf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.687380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-config-data" (OuterVolumeSpecName: "config-data") pod "fa204458-2052-48a7-9b65-10fdfb68e792" (UID: "fa204458-2052-48a7-9b65-10fdfb68e792"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.694398 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa204458-2052-48a7-9b65-10fdfb68e792" (UID: "fa204458-2052-48a7-9b65-10fdfb68e792"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.763068 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.763102 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pf88\" (UniqueName: \"kubernetes.io/projected/fa204458-2052-48a7-9b65-10fdfb68e792-kube-api-access-7pf88\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.763129 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa204458-2052-48a7-9b65-10fdfb68e792-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.849918 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92","Type":"ContainerStarted","Data":"30f6c92bfa88e0c50a824bbd5fb87ff5b3d7fbb4606aca9dfc830b62320a94a1"} Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.849972 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92","Type":"ContainerStarted","Data":"d90632d330d38e9be6cef5206c738ec59d81e870670d687f6bccc464bedfaadd"} Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.852871 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa204458-2052-48a7-9b65-10fdfb68e792","Type":"ContainerDied","Data":"b8bfae512ccf1b72164a2de13546ddab483dfb0ffd27e7de56bc460c1790e524"} Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.852935 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.852970 4856 scope.go:117] "RemoveContainer" containerID="fd39b333647f5bd987347453fe62460369f3d71510528f5459ac17f4c944929d" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.902159 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.919900 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.930654 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:31:59 crc kubenswrapper[4856]: E1122 07:31:59.931169 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa204458-2052-48a7-9b65-10fdfb68e792" containerName="nova-scheduler-scheduler" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.931184 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa204458-2052-48a7-9b65-10fdfb68e792" containerName="nova-scheduler-scheduler" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.931443 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa204458-2052-48a7-9b65-10fdfb68e792" containerName="nova-scheduler-scheduler" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.932203 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.934594 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.946159 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:31:59 crc kubenswrapper[4856]: W1122 07:31:59.966684 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1ccf431_f692_459f_b249_66bd9747d09c.slice/crio-ed53884447da7721af2bd041c798876cdfd0f649185ae17195b88a7da8863f6e WatchSource:0}: Error finding container ed53884447da7721af2bd041c798876cdfd0f649185ae17195b88a7da8863f6e: Status 404 returned error can't find the container with id ed53884447da7721af2bd041c798876cdfd0f649185ae17195b88a7da8863f6e Nov 22 07:31:59 crc kubenswrapper[4856]: I1122 07:31:59.970608 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.069036 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-config-data\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.069121 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6xnw\" (UniqueName: \"kubernetes.io/projected/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-kube-api-access-l6xnw\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.069263 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-combined-ca-bundle\") 
pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.170594 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-config-data\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.170662 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6xnw\" (UniqueName: \"kubernetes.io/projected/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-kube-api-access-l6xnw\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.170811 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.174409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-config-data\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.175562 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.192950 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6xnw\" (UniqueName: \"kubernetes.io/projected/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-kube-api-access-l6xnw\") pod \"nova-scheduler-0\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.269864 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:32:00 crc kubenswrapper[4856]: E1122 07:32:00.422192 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode782df2b_d7a8_4319_aead_d5165a61314a.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.721819 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a263525-c6c2-4256-8e77-7ecf12df9caf" path="/var/lib/kubelet/pods/3a263525-c6c2-4256-8e77-7ecf12df9caf/volumes" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.725082 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa204458-2052-48a7-9b65-10fdfb68e792" path="/var/lib/kubelet/pods/fa204458-2052-48a7-9b65-10fdfb68e792/volumes" Nov 22 07:32:00 crc kubenswrapper[4856]: W1122 07:32:00.725207 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07329cf7_c3ff_410a_8ab7_8f19ae9d3974.slice/crio-1803224d4bc27f76558788710d44cf87b3090071d8a0bc5d61101c30d3510424 WatchSource:0}: Error finding container 1803224d4bc27f76558788710d44cf87b3090071d8a0bc5d61101c30d3510424: Status 404 returned error can't find the container with id 1803224d4bc27f76558788710d44cf87b3090071d8a0bc5d61101c30d3510424 Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.725747 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.863347 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92","Type":"ContainerStarted","Data":"34b8ef8ac4487f65f5dff6c904e4aa6b5fc3a3fd278121552b6ef063060959ec"} Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.867233 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1ccf431-f692-459f-b249-66bd9747d09c","Type":"ContainerStarted","Data":"748dcd5bb334b4bc2361b63a4afbafd4286f9d6147d5c3a3a460a57c1f55b549"} Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.867262 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1ccf431-f692-459f-b249-66bd9747d09c","Type":"ContainerStarted","Data":"f0b1a60d0b1a6de591d20e91274f4f847de400e209ee3854019d56a6b7527817"} Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.867274 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1ccf431-f692-459f-b249-66bd9747d09c","Type":"ContainerStarted","Data":"ed53884447da7721af2bd041c798876cdfd0f649185ae17195b88a7da8863f6e"} Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.874086 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07329cf7-c3ff-410a-8ab7-8f19ae9d3974","Type":"ContainerStarted","Data":"1803224d4bc27f76558788710d44cf87b3090071d8a0bc5d61101c30d3510424"} Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.884471 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.884451436 podStartE2EDuration="2.884451436s" podCreationTimestamp="2025-11-22 07:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:00.881540637 +0000 UTC 
m=+1763.294933905" watchObservedRunningTime="2025-11-22 07:32:00.884451436 +0000 UTC m=+1763.297844694" Nov 22 07:32:00 crc kubenswrapper[4856]: I1122 07:32:00.905990 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.905959632 podStartE2EDuration="2.905959632s" podCreationTimestamp="2025-11-22 07:31:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:00.902256221 +0000 UTC m=+1763.315649479" watchObservedRunningTime="2025-11-22 07:32:00.905959632 +0000 UTC m=+1763.319352890" Nov 22 07:32:01 crc kubenswrapper[4856]: I1122 07:32:01.885872 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07329cf7-c3ff-410a-8ab7-8f19ae9d3974","Type":"ContainerStarted","Data":"9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57"} Nov 22 07:32:03 crc kubenswrapper[4856]: I1122 07:32:03.773138 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:32:03 crc kubenswrapper[4856]: I1122 07:32:03.774479 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 07:32:05 crc kubenswrapper[4856]: I1122 07:32:05.270421 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 07:32:08 crc kubenswrapper[4856]: I1122 07:32:08.773440 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:32:08 crc kubenswrapper[4856]: I1122 07:32:08.775038 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 07:32:09 crc kubenswrapper[4856]: I1122 07:32:09.348691 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:32:09 crc kubenswrapper[4856]: I1122 07:32:09.348769 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 07:32:09 crc kubenswrapper[4856]: I1122 07:32:09.785835 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:32:09 crc kubenswrapper[4856]: I1122 07:32:09.785865 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:32:10 crc kubenswrapper[4856]: I1122 07:32:10.270297 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 07:32:10 crc kubenswrapper[4856]: I1122 07:32:10.303338 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 07:32:10 crc kubenswrapper[4856]: I1122 07:32:10.329872 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=11.329854045 podStartE2EDuration="11.329854045s" podCreationTimestamp="2025-11-22 07:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:01.903220169 +0000 UTC m=+1764.316613427" watchObservedRunningTime="2025-11-22 07:32:10.329854045 +0000 UTC m=+1772.743247303" Nov 22 07:32:10 crc kubenswrapper[4856]: I1122 07:32:10.361751 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:32:10 crc kubenswrapper[4856]: I1122 07:32:10.361779 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 07:32:10 crc kubenswrapper[4856]: E1122 07:32:10.680135 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode782df2b_d7a8_4319_aead_d5165a61314a.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:32:10 crc kubenswrapper[4856]: I1122 07:32:10.710290 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:32:10 crc kubenswrapper[4856]: E1122 07:32:10.710541 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:32:11 crc kubenswrapper[4856]: I1122 07:32:11.001794 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.628759 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hwrb9"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.655585 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.656060 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="9a94a048-f961-4675-85bf-88414e414a51" containerName="openstackclient" containerID="cri-o://b3b1f2a0ac6e8ef5ca8623acaf447ee1e4d4c639c63af0026dc10d1cc70ff28a" gracePeriod=2 Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.672283 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.710465 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement00a5-account-delete-qrc4g"] Nov 22 07:32:17 crc kubenswrapper[4856]: E1122 07:32:17.711059 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a94a048-f961-4675-85bf-88414e414a51" containerName="openstackclient" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.711133 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a94a048-f961-4675-85bf-88414e414a51" containerName="openstackclient" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 
07:32:17.711399 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a94a048-f961-4675-85bf-88414e414a51" containerName="openstackclient" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.720336 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.724166 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement00a5-account-delete-qrc4g"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.746100 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8zttm"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.746518 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-8zttm" podUID="ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" containerName="openstack-network-exporter" containerID="cri-o://508f07d95f18906c3efe0a28a1a716873bf2a5fa811acd5075db09b60b6b55fb" gracePeriod=30 Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.767830 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-zz5h4"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.806898 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.831610 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance1a70-account-delete-m5qqx"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.832810 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.850811 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6kdp\" (UniqueName: \"kubernetes.io/projected/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-kube-api-access-g6kdp\") pod \"placement00a5-account-delete-qrc4g\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.850920 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-operator-scripts\") pod \"placement00a5-account-delete-qrc4g\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.883777 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance1a70-account-delete-m5qqx"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.953219 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican29ba-account-delete-f7rqf"] Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.954449 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.955830 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6kdp\" (UniqueName: \"kubernetes.io/projected/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-kube-api-access-g6kdp\") pod \"placement00a5-account-delete-qrc4g\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.955923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-operator-scripts\") pod \"placement00a5-account-delete-qrc4g\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.955957 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df337886-1469-499f-bbb4-564f479cafa7-operator-scripts\") pod \"glance1a70-account-delete-m5qqx\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.955987 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhkvs\" (UniqueName: \"kubernetes.io/projected/df337886-1469-499f-bbb4-564f479cafa7-kube-api-access-xhkvs\") pod \"glance1a70-account-delete-m5qqx\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.956858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-operator-scripts\") pod \"placement00a5-account-delete-qrc4g\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.957020 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-ckxn9"] Nov 22 07:32:17 crc kubenswrapper[4856]: E1122 07:32:17.957053 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:17 crc kubenswrapper[4856]: E1122 07:32:17.957723 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data podName:4ac8c44e-0667-43f7-aebd-a7b4c5bcb429 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:18.457707057 +0000 UTC m=+1780.871100315 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data") pod "rabbitmq-cell1-server-0" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:17 crc kubenswrapper[4856]: I1122 07:32:17.975588 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-ckxn9"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.006303 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6kdp\" (UniqueName: \"kubernetes.io/projected/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-kube-api-access-g6kdp\") pod \"placement00a5-account-delete-qrc4g\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.018595 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican29ba-account-delete-f7rqf"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.050921 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.057074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhkvs\" (UniqueName: \"kubernetes.io/projected/df337886-1469-499f-bbb4-564f479cafa7-kube-api-access-xhkvs\") pod \"glance1a70-account-delete-m5qqx\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.057159 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/519a764c-9ac2-4f94-84c6-7c284ab676cd-operator-scripts\") pod \"barbican29ba-account-delete-f7rqf\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.057181 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkvz4\" (UniqueName: \"kubernetes.io/projected/519a764c-9ac2-4f94-84c6-7c284ab676cd-kube-api-access-vkvz4\") pod \"barbican29ba-account-delete-f7rqf\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.057289 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df337886-1469-499f-bbb4-564f479cafa7-operator-scripts\") pod \"glance1a70-account-delete-m5qqx\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.069176 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df337886-1469-499f-bbb4-564f479cafa7-operator-scripts\") pod \"glance1a70-account-delete-m5qqx\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.071896 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinderceda-account-delete-chlrj"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.073083 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.102522 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-kvfrn"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.106963 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhkvs\" (UniqueName: \"kubernetes.io/projected/df337886-1469-499f-bbb4-564f479cafa7-kube-api-access-xhkvs\") pod \"glance1a70-account-delete-m5qqx\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.117580 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinderceda-account-delete-chlrj"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.138568 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-kvfrn"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.138978 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8zttm_ee10c8a7-96d4-4ee5-8306-a17bceb73cf1/openstack-network-exporter/0.log" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.139011 4856 generic.go:334] "Generic (PLEG): container finished" podID="ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" containerID="508f07d95f18906c3efe0a28a1a716873bf2a5fa811acd5075db09b60b6b55fb" exitCode=2 Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.139035 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8zttm" event={"ID":"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1","Type":"ContainerDied","Data":"508f07d95f18906c3efe0a28a1a716873bf2a5fa811acd5075db09b60b6b55fb"} Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.161237 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4flb\" (UniqueName: \"kubernetes.io/projected/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-kube-api-access-t4flb\") pod \"cinderceda-account-delete-chlrj\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.161291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-operator-scripts\") pod \"cinderceda-account-delete-chlrj\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.161456 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/519a764c-9ac2-4f94-84c6-7c284ab676cd-operator-scripts\") pod \"barbican29ba-account-delete-f7rqf\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.161484 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkvz4\" (UniqueName: \"kubernetes.io/projected/519a764c-9ac2-4f94-84c6-7c284ab676cd-kube-api-access-vkvz4\") pod \"barbican29ba-account-delete-f7rqf\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.162358 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/519a764c-9ac2-4f94-84c6-7c284ab676cd-operator-scripts\") pod \"barbican29ba-account-delete-f7rqf\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.168338 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.207190 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkvz4\" (UniqueName: \"kubernetes.io/projected/519a764c-9ac2-4f94-84c6-7c284ab676cd-kube-api-access-vkvz4\") pod \"barbican29ba-account-delete-f7rqf\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.248938 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hwrb9" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" probeResult="failure" output=< Nov 22 07:32:18 crc kubenswrapper[4856]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Nov 22 07:32:18 crc kubenswrapper[4856]: > Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.263073 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4flb\" (UniqueName: \"kubernetes.io/projected/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-kube-api-access-t4flb\") pod \"cinderceda-account-delete-chlrj\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.263122 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-operator-scripts\") pod \"cinderceda-account-delete-chlrj\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.264857 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.264899 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data podName:0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:18.764885112 +0000 UTC m=+1781.178278370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data") pod "rabbitmq-server-0" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89") : configmap "rabbitmq-config-data" not found Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.265663 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-operator-scripts\") pod \"cinderceda-account-delete-chlrj\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.285380 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.305214 4856 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-hwrb9" message="Exiting ovn-controller (1) " Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.305426 4856 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-hwrb9" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" containerID="cri-o://9dcec325019ebdfce923c32261c3801484f6c45ab535eb4623bd34243cd70533" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.305455 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-hwrb9" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" containerID="cri-o://9dcec325019ebdfce923c32261c3801484f6c45ab535eb4623bd34243cd70533" gracePeriod=30 Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.342918 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.346468 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4flb\" (UniqueName: \"kubernetes.io/projected/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-kube-api-access-t4flb\") pod \"cinderceda-account-delete-chlrj\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.370103 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi2c75-account-delete-c4rqx"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.397167 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.461113 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.538775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zc4r\" (UniqueName: \"kubernetes.io/projected/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-kube-api-access-5zc4r\") pod \"novaapi2c75-account-delete-c4rqx\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.538842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-operator-scripts\") pod \"novaapi2c75-account-delete-c4rqx\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.539334 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.539404 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data podName:4ac8c44e-0667-43f7-aebd-a7b4c5bcb429 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:19.539384888 +0000 UTC m=+1781.952778146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data") pod "rabbitmq-cell1-server-0" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.539821 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi2c75-account-delete-c4rqx"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.607581 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell07477-account-delete-5hzjb"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.610412 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.636792 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-298l7"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.644809 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zc4r\" (UniqueName: \"kubernetes.io/projected/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-kube-api-access-5zc4r\") pod \"novaapi2c75-account-delete-c4rqx\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.645152 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-operator-scripts\") pod \"novaapi2c75-account-delete-c4rqx\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.658053 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-operator-scripts\") pod \"novaapi2c75-account-delete-c4rqx\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.711629 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zc4r\" (UniqueName: \"kubernetes.io/projected/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-kube-api-access-5zc4r\") pod \"novaapi2c75-account-delete-c4rqx\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.730669 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.747823 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxnqt\" (UniqueName: \"kubernetes.io/projected/a66fa8fc-f908-43e7-a169-6156fc2092f8-kube-api-access-pxnqt\") pod \"novacell07477-account-delete-5hzjb\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.747906 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a66fa8fc-f908-43e7-a169-6156fc2092f8-operator-scripts\") pod \"novacell07477-account-delete-5hzjb\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.821363 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb" path="/var/lib/kubelet/pods/10cb606c-6ef8-49e7-9fe4-08dd07fbd0fb/volumes" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.827841 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb19735-07df-4fbd-9f9a-4d3aa861e03a" path="/var/lib/kubelet/pods/ffb19735-07df-4fbd-9f9a-4d3aa861e03a/volumes" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.828584 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-298l7"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.872787 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell07477-account-delete-5hzjb"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.872815 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-n9nhw"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.882677 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxnqt\" (UniqueName: \"kubernetes.io/projected/a66fa8fc-f908-43e7-a169-6156fc2092f8-kube-api-access-pxnqt\") pod \"novacell07477-account-delete-5hzjb\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.883177 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a66fa8fc-f908-43e7-a169-6156fc2092f8-operator-scripts\") pod \"novacell07477-account-delete-5hzjb\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.883663 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:32:18 crc kubenswrapper[4856]: E1122 07:32:18.883720 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data podName:0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:19.883703844 +0000 UTC m=+1782.297097102 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data") pod "rabbitmq-server-0" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89") : configmap "rabbitmq-config-data" not found Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.886449 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a66fa8fc-f908-43e7-a169-6156fc2092f8-operator-scripts\") pod \"novacell07477-account-delete-5hzjb\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.888269 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-n9nhw"] Nov 22 07:32:18 crc kubenswrapper[4856]: I1122 07:32:18.997826 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-qmflx"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.001610 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxnqt\" (UniqueName: \"kubernetes.io/projected/a66fa8fc-f908-43e7-a169-6156fc2092f8-kube-api-access-pxnqt\") pod \"novacell07477-account-delete-5hzjb\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.028643 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.029779 4856 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/nova-metadata-0" secret="" err="secret \"nova-nova-dockercfg-9nqcq\" not found" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.058242 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.071200 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.071554 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="ovn-northd" containerID="cri-o://0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.071740 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="openstack-network-exporter" containerID="cri-o://bef68756d75607bcf49b118ee011e2d46c1fca15a0f4988d5490ac2121c7d6ec" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.089854 4856 secret.go:188] Couldn't get secret openstack/nova-metadata-config-data: secret "nova-metadata-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.090999 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data podName:e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:19.590975018 +0000 UTC m=+1782.004368276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data") pod "nova-metadata-0" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92") : secret "nova-metadata-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.131692 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-qmflx"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.146573 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr4d2"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.213163 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.219421 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron73f8-account-delete-b8zpb"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.220820 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.245223 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-nr4d2"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.247305 4856 generic.go:334] "Generic (PLEG): container finished" podID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerID="9dcec325019ebdfce923c32261c3801484f6c45ab535eb4623bd34243cd70533" exitCode=0 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.247771 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9" event={"ID":"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3","Type":"ContainerDied","Data":"9dcec325019ebdfce923c32261c3801484f6c45ab535eb4623bd34243cd70533"} Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.257182 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.268436 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron73f8-account-delete-b8zpb"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.276031 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-5mjrn"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.286262 4856 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/nova-metadata-0" secret="" err="secret \"nova-nova-dockercfg-9nqcq\" not found" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.291913 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-5mjrn"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.296865 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.304498 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fc96b95bb-4mtxg"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.304772 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-fc96b95bb-4mtxg" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-log" containerID="cri-o://4271d7224db735b34906645781ea2372db51f2e3d614022512e9b52eee61ba39" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.305134 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="openstack-network-exporter" containerID="cri-o://a27e20589e4e9738c8b1ba2a88ec92db294be52ec1405bb5a02a6d451b8e8534" gracePeriod=300 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.305455 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-fc96b95bb-4mtxg" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-api" containerID="cri-o://bb57a5740eec3fe63e3bb880f72bda941c5f54b634af051477157f490cf788ec" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.306387 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.309160 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-operator-scripts\") pod \"neutron73f8-account-delete-b8zpb\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.309260 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddl8\" (UniqueName: \"kubernetes.io/projected/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-kube-api-access-tddl8\") pod \"neutron73f8-account-delete-b8zpb\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.317971 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.333560 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b57bd9f89-z95qh"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.338254 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="openstack-network-exporter" containerID="cri-o://beff5c4f9865829069fb5a650f73d4daaf877eaaaf7cd411dbc96c82233e8e19" gracePeriod=300 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.338419 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" 
podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerName="dnsmasq-dns" containerID="cri-o://926e6fd1571566f17f16d953955d8eb260b1f5bed95ee74d64360f30908b0a98" gracePeriod=10 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.349740 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-w7rvq"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.358823 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-w7rvq"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.370590 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.370857 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-log" containerID="cri-o://87c89906bf819de89643974ff91061bf464fcbe0da565621b557fdb026d38601" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.370929 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-httpd" containerID="cri-o://cfc3e2910129f9e8a60e68b621e6eee3267b6c9aa86e078920823532cee13fa0" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.391878 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.394083 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.394123 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.398573 4856 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/nova-api-0" secret="" err="secret \"nova-nova-dockercfg-9nqcq\" not found" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.410574 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-operator-scripts\") pod \"neutron73f8-account-delete-b8zpb\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.410633 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tddl8\" (UniqueName: \"kubernetes.io/projected/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-kube-api-access-tddl8\") pod \"neutron73f8-account-delete-b8zpb\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.411614 4856 secret.go:188] Couldn't get secret openstack/nova-api-config-data: secret "nova-api-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.411658 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data podName:b1ccf431-f692-459f-b249-66bd9747d09c nodeName:}" failed. No retries permitted until 2025-11-22 07:32:19.911643902 +0000 UTC m=+1782.325037160 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data") pod "nova-api-0" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c") : secret "nova-api-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.412105 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.412300 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="cinder-scheduler" containerID="cri-o://e5b7a326f0ad6ee2471d7167a3c293c93e8329469da146c3d10a4dab31910b17" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.412431 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="probe" containerID="cri-o://72290d753c232f9f411f4eca62ef3cf6c13d4eb7af108e1e14ff35b4c3746200" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.412598 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-operator-scripts\") pod \"neutron73f8-account-delete-b8zpb\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.420443 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.439434 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.439794 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-log" containerID="cri-o://f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.439946 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-httpd" containerID="cri-o://252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.486357 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddl8\" (UniqueName: \"kubernetes.io/projected/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-kube-api-access-tddl8\") pod \"neutron73f8-account-delete-b8zpb\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.490320 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.490829 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-server" containerID="cri-o://c22be9584965ebc42abd66c9bfe89aca421bd210a908db30115541e641df706a" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491222 4856 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-updater" containerID="cri-o://dafe6ce95027e629d7af60bc33995b31a71bb7ef4de51b371a2ee48e7639d083" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491263 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-auditor" containerID="cri-o://423b2c9f27662f7d6367f52a13a9033ed0e18cb78b5dc553d9b64162d80e2544" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491297 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-replicator" containerID="cri-o://9b6021a67115d6e55eab967cf6d9caa17bd06d922a3d54b43b6f5dec9196e96d" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491433 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-server" containerID="cri-o://1cf12acdc3f6a6abb938bdcfc295ffa2101088f787027d51f80b951797bb5873" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491491 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="swift-recon-cron" containerID="cri-o://f6b36d1ad73481da60eada98f0cdb3c61e2e68ee475247d1ff9682f6f708afb3" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491552 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="rsync" containerID="cri-o://ecc44836c8466c6fbcc848350b1a769fe7507c5c9ee03a0001c9685bf0cd78bc" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491595 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-expirer" containerID="cri-o://9f435952eb044c7ab5dcb833fc12c8685ca6e3fd82a9405acc66ff7e0a5e1488" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491637 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-updater" containerID="cri-o://4011c89f0b6803e45417d4182117f87df790db47e51c6dc417714bdbab0d9328" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491668 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-auditor" containerID="cri-o://be283db24da6932b997e62df069e78ce522bed9042d62990be78c405a0d8baff" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491697 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-replicator" containerID="cri-o://7739725925a289b294a1260a2963889a83f70dbfee02df9ebc4a046996eec165" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491728 4856 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-server" containerID="cri-o://5aab3b9349e7624b4bdd58b9ddc145142c8697523405f28d16e4f3c04ea145ae" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491759 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-reaper" containerID="cri-o://a5a09f33961facab4f00ff54e2e02326d023fd20d2ac164e6dacaf7131204425" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491786 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-auditor" containerID="cri-o://199edfe080cf33b200ed5effe88b6a79246b1c89eb804c543da87be52e6c569e" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.491814 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-replicator" containerID="cri-o://507063dad370d0aa753a3a159944ec9f090dd4d59c3360495ed98d90f8250c2e" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.528616 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="ovsdbserver-sb" containerID="cri-o://4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a" gracePeriod=300 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.537755 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.538175 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api-log" containerID="cri-o://0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.545081 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api" containerID="cri-o://ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.562329 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.202:5353: connect: connection refused" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.580691 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.580813 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="ovsdbserver-nb" containerID="cri-o://52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c" gracePeriod=300 Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.617352 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.617423 4856 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data podName:4ac8c44e-0667-43f7-aebd-a7b4c5bcb429 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:21.617403844 +0000 UTC m=+1784.030797102 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data") pod "rabbitmq-cell1-server-0" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.617490 4856 secret.go:188] Couldn't get secret openstack/nova-metadata-config-data: secret "nova-metadata-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.617532 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data podName:e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:20.617524848 +0000 UTC m=+1783.030918106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data") pod "nova-metadata-0" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92") : secret "nova-metadata-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.647341 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.700608 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.732190 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-pb4xh"] Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.754725 4856 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.771264 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pb4xh"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.824525 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.859386 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e905-account-create-2fzk4"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.872472 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e905-account-create-2fzk4"] Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.880388 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c is running failed: container process not found" containerID="52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.883724 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c is running failed: container process not found" 
containerID="52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.884218 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c448d48d9-lmlhj"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.884472 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c448d48d9-lmlhj" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerName="neutron-api" containerID="cri-o://273797dc3d1ff426732192e04e6bd642a97dc99523e657e806f91b951e7b928a" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.884648 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c448d48d9-lmlhj" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerName="neutron-httpd" containerID="cri-o://e4ce8b9ed4b91b14fe577f0657b03ac8159da3736fa9337862e230ef16a43afb" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.884843 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c is running failed: container process not found" containerID="52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.884911 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-nb-0" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="ovsdbserver-nb" Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.897018 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-56bc6597ff-ll6fl"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.897269 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-56bc6597ff-ll6fl" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-httpd" containerID="cri-o://6219833dba75dd8b4b4fd8f9b3965d45ed8beebecf788175cdad2c1025ca7eea" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.897694 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-56bc6597ff-ll6fl" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-server" containerID="cri-o://545281c9124acb52b1ddf1192147efb7e07a95b9f53d9d183531f5e1698bb14f" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.911628 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-68b59dd9f8-dgbs9"] Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.911953 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener-log" containerID="cri-o://89fb7a00fd4efc74515a0c3d4a20db20a62bcd9de48f98ba66ab6036caf8a420" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.912350 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" 
podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener" containerID="cri-o://79ce02c0e12e71d034284ed8bae98790aa968294e2855ff785b9729ddd86f16b" gracePeriod=30 Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.920402 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerName="rabbitmq" containerID="cri-o://fe053dc6b4b700a119cd588385a844042a2dde38e5a679600fc61619199db0cc" gracePeriod=604800 Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.947050 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.947115 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data podName:0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:21.947100253 +0000 UTC m=+1784.360493511 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data") pod "rabbitmq-server-0" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89") : configmap "rabbitmq-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.947547 4856 secret.go:188] Couldn't get secret openstack/nova-api-config-data: secret "nova-api-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: E1122 07:32:19.947575 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data podName:b1ccf431-f692-459f-b249-66bd9747d09c nodeName:}" failed. No retries permitted until 2025-11-22 07:32:20.947566625 +0000 UTC m=+1783.360959883 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data") pod "nova-api-0" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c") : secret "nova-api-config-data" not found Nov 22 07:32:19 crc kubenswrapper[4856]: I1122 07:32:19.990490 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.044778 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f69556b5c-qmsmf"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.044981 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-f69556b5c-qmsmf" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker-log" containerID="cri-o://df445e7e3ade77c2dd919f37116927b5b747d07550482260db0ad6f5970682fd" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.045418 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-f69556b5c-qmsmf" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker" containerID="cri-o://08e96c872138b89aa87fe681eda59fce3d594656121c84a13f4d89a1c5be6ca8" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.047062 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8zttm_ee10c8a7-96d4-4ee5-8306-a17bceb73cf1/openstack-network-exporter/0.log" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.047144 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.061783 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" containerID="cri-o://1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" gracePeriod=28 Nov 22 07:32:20 crc kubenswrapper[4856]: W1122 07:32:20.068999 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eaa66e0_ee9b_4115_b385_222e8ac0c21c.slice/crio-9970fb69254ee38cebcff01cf14152d6038ef05428b95cedacc4f1c2e4b74be5 WatchSource:0}: Error finding container 9970fb69254ee38cebcff01cf14152d6038ef05428b95cedacc4f1c2e4b74be5: Status 404 returned error can't find the container with id 9970fb69254ee38cebcff01cf14152d6038ef05428b95cedacc4f1c2e4b74be5 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.071861 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79bdcb776d-cl77m"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.072127 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" containerID="cri-o://b6356dec8e3af2060f0508772909c3164a9dbf1ad47a0fddc1e261b2db1f8b4f" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.072270 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" containerID="cri-o://4b3192676d3e19f237ce934c70e2e2105edb9e9415b2d7c5b848a4de24f6ac9a" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.087668 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.092661 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.093044 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="8f9815d1-2297-4a66-9793-ba485053ca2a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://75e814a4cfa4f97ecc9bfab324de4d5b2b33d836ae12cc47b87c6782b91c5dae" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.136584 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.157860 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovs-rundir\") pod \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.157909 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-log-ovn\") pod \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.157939 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlttb\" (UniqueName: \"kubernetes.io/projected/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-kube-api-access-hlttb\") pod \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.157969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfz9n\" (UniqueName: \"kubernetes.io/projected/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-kube-api-access-xfz9n\") pod \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158021 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-combined-ca-bundle\") pod \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158024 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" (UID: "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158072 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" (UID: "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158085 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-scripts\") pod \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158162 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-ovn-controller-tls-certs\") pod \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158199 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run\") pod \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158215 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-config\") pod \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158269 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-combined-ca-bundle\") pod \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158299 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run-ovn\") pod \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\" (UID: \"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158342 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-metrics-certs-tls-certs\") pod \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovn-rundir\") pod \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\" (UID: \"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1\") " Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158783 4856 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovs-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158800 4856 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.158846 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" (UID: "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.162156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" (UID: "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.162308 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run" (OuterVolumeSpecName: "var-run") pod "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" (UID: "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.164472 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.164532 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-scripts" (OuterVolumeSpecName: "scripts") pod "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" (UID: "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.164702 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" containerName="nova-scheduler-scheduler" containerID="cri-o://9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.165106 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-kube-api-access-xfz9n" (OuterVolumeSpecName: "kube-api-access-xfz9n") pod "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" (UID: "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3"). InnerVolumeSpecName "kube-api-access-xfz9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.176805 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-config" (OuterVolumeSpecName: "config") pod "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" (UID: "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.204568 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gd2pc"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.210555 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-gd2pc"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.217846 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-kube-api-access-hlttb" (OuterVolumeSpecName: "kube-api-access-hlttb") pod "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" (UID: "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1"). InnerVolumeSpecName "kube-api-access-hlttb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.224008 4856 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Nov 22 07:32:20 crc kubenswrapper[4856]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 07:32:20 crc kubenswrapper[4856]: + source /usr/local/bin/container-scripts/functions Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNBridge=br-int Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNRemote=tcp:localhost:6642 Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNEncapType=geneve Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNAvailabilityZones= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ EnableChassisAsGateway=true Nov 22 07:32:20 crc kubenswrapper[4856]: ++ PhysicalNetworks= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNHostName= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 07:32:20 crc kubenswrapper[4856]: ++ ovs_dir=/var/lib/openvswitch Nov 22 07:32:20 crc kubenswrapper[4856]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 07:32:20 crc kubenswrapper[4856]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 07:32:20 crc kubenswrapper[4856]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + cleanup_ovsdb_server_semaphore Nov 22 07:32:20 crc kubenswrapper[4856]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:32:20 crc kubenswrapper[4856]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 07:32:20 crc kubenswrapper[4856]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-zz5h4" message=< Nov 22 07:32:20 crc kubenswrapper[4856]: Exiting ovsdb-server (5) [ OK ] Nov 22 07:32:20 crc kubenswrapper[4856]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 07:32:20 crc kubenswrapper[4856]: + source /usr/local/bin/container-scripts/functions Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNBridge=br-int Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNRemote=tcp:localhost:6642 Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNEncapType=geneve Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNAvailabilityZones= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ EnableChassisAsGateway=true Nov 22 07:32:20 crc kubenswrapper[4856]: ++ PhysicalNetworks= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNHostName= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 07:32:20 crc kubenswrapper[4856]: ++ ovs_dir=/var/lib/openvswitch Nov 22 07:32:20 crc kubenswrapper[4856]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 07:32:20 crc kubenswrapper[4856]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 07:32:20 crc kubenswrapper[4856]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + cleanup_ovsdb_server_semaphore Nov 22 07:32:20 crc kubenswrapper[4856]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:32:20 crc kubenswrapper[4856]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 07:32:20 crc kubenswrapper[4856]: > Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.224048 4856 kuberuntime_container.go:691] "PreStop hook failed" err=< Nov 22 07:32:20 crc kubenswrapper[4856]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Nov 22 07:32:20 crc kubenswrapper[4856]: + source /usr/local/bin/container-scripts/functions Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNBridge=br-int Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNRemote=tcp:localhost:6642 Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNEncapType=geneve Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNAvailabilityZones= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ EnableChassisAsGateway=true Nov 22 07:32:20 crc kubenswrapper[4856]: ++ PhysicalNetworks= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ OVNHostName= Nov 22 07:32:20 crc kubenswrapper[4856]: ++ DB_FILE=/etc/openvswitch/conf.db Nov 22 07:32:20 crc kubenswrapper[4856]: ++ ovs_dir=/var/lib/openvswitch Nov 22 07:32:20 crc kubenswrapper[4856]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Nov 22 07:32:20 crc kubenswrapper[4856]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Nov 22 07:32:20 crc kubenswrapper[4856]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + sleep 0.5 Nov 22 07:32:20 crc kubenswrapper[4856]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Nov 22 07:32:20 crc kubenswrapper[4856]: + cleanup_ovsdb_server_semaphore Nov 22 07:32:20 crc kubenswrapper[4856]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Nov 22 07:32:20 crc kubenswrapper[4856]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Nov 22 07:32:20 crc kubenswrapper[4856]: > pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" containerID="cri-o://f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.224082 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" containerID="cri-o://f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" gracePeriod=28 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.224909 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.225214 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="2b88f55c-12d5-4cba-a155-aa00c19c94f4" containerName="nova-cell1-conductor-conductor" containerID="cri-o://889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.226727 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" (UID: "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.251564 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjp7j"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.254958 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" (UID: "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260345 4856 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260382 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260394 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260410 4856 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260422 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260433 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlttb\" (UniqueName: \"kubernetes.io/projected/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-kube-api-access-hlttb\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260444 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfz9n\" (UniqueName: \"kubernetes.io/projected/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-kube-api-access-xfz9n\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260455 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.260465 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.267636 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-mjp7j"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.275323 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.275554 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="18fcab55-6a49-4c21-9314-435129cf376a" containerName="nova-cell0-conductor-conductor" containerID="cri-o://ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411" gracePeriod=30 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.289327 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement00a5-account-delete-qrc4g"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.295686 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , 
stderr: , exit code -1" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.298632 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance1a70-account-delete-m5qqx"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.306003 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.315407 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb0a212d-74dc-40d3-84a4-bce83b78e788","Type":"ContainerDied","Data":"f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.317561 4856 generic.go:334] "Generic (PLEG): container finished" podID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerID="f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0" exitCode=143 Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.326880 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.326955 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" containerName="nova-scheduler-scheduler" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.330674 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance1a70-account-delete-m5qqx" event={"ID":"df337886-1469-499f-bbb4-564f479cafa7","Type":"ContainerStarted","Data":"8286b09b3af8de22cd7b4baf2b144f226e9a825dc711ba4c0cb6b9829ff161b4"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.342532 4856 generic.go:334] "Generic (PLEG): container finished" podID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerID="87c89906bf819de89643974ff91061bf464fcbe0da565621b557fdb026d38601" exitCode=143 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.342587 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c75cebe3-86db-4be1-9755-4bd8a83c9796","Type":"ContainerDied","Data":"87c89906bf819de89643974ff91061bf464fcbe0da565621b557fdb026d38601"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.347420 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinderceda-account-delete-chlrj"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.357557 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerName="rabbitmq" containerID="cri-o://1a983d61b8dfe6b5b848b2945b31f7053bd5045dbc03ba4867c1e7855f9b3dcd" gracePeriod=604800 Nov 22 07:32:20 crc kubenswrapper[4856]: W1122 07:32:20.367148 4856 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d5250b2_5ff2_4da6_a2b9_038ff0e0d30f.slice/crio-9abfc1f2b88c3f7fa0191ded52eb115d679f975757c1bf640410340993f23dc1 WatchSource:0}: Error finding container 9abfc1f2b88c3f7fa0191ded52eb115d679f975757c1bf640410340993f23dc1: Status 404 returned error can't find the container with id 9abfc1f2b88c3f7fa0191ded52eb115d679f975757c1bf640410340993f23dc1 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.373855 4856 generic.go:334] "Generic (PLEG): container finished" podID="55308d58-6be6-483d-bc27-2904f15d32f0" containerID="beff5c4f9865829069fb5a650f73d4daaf877eaaaf7cd411dbc96c82233e8e19" exitCode=2 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.373924 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55308d58-6be6-483d-bc27-2904f15d32f0","Type":"ContainerDied","Data":"beff5c4f9865829069fb5a650f73d4daaf877eaaaf7cd411dbc96c82233e8e19"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.376970 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00a5-account-delete-qrc4g" event={"ID":"9eaa66e0-ee9b-4115-b385-222e8ac0c21c","Type":"ContainerStarted","Data":"9970fb69254ee38cebcff01cf14152d6038ef05428b95cedacc4f1c2e4b74be5"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.378341 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-8zttm_ee10c8a7-96d4-4ee5-8306-a17bceb73cf1/openstack-network-exporter/0.log" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.378399 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-8zttm" event={"ID":"ee10c8a7-96d4-4ee5-8306-a17bceb73cf1","Type":"ContainerDied","Data":"99b11eeba96df72f922be4388c2d874e1c34e44149ab656894a3c81bacdeab57"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.378428 4856 scope.go:117] "RemoveContainer" containerID="508f07d95f18906c3efe0a28a1a716873bf2a5fa811acd5075db09b60b6b55fb" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.378586 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-8zttm" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.379392 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell07477-account-delete-5hzjb"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.393396 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="8f9815d1-2297-4a66-9793-ba485053ca2a" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.201:6080/vnc_lite.html\": dial tcp 10.217.0.201:6080: connect: connection refused" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.399462 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hwrb9" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.399467 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hwrb9" event={"ID":"e1e80ae2-f12d-4bfb-acca-e60281ef6dd3","Type":"ContainerDied","Data":"efd643fc4f2fed96eafc1048f314e88d90b5b3ffb076b18bdd30c237ebb01b33"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.411214 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican29ba-account-delete-f7rqf"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.425767 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" (UID: "e1e80ae2-f12d-4bfb-acca-e60281ef6dd3"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.432953 4856 generic.go:334] "Generic (PLEG): container finished" podID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerID="bef68756d75607bcf49b118ee011e2d46c1fca15a0f4988d5490ac2121c7d6ec" exitCode=2 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.433009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3aa24715-1df9-4a47-9817-4a1b68679d08","Type":"ContainerDied","Data":"bef68756d75607bcf49b118ee011e2d46c1fca15a0f4988d5490ac2121c7d6ec"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.444478 4856 generic.go:334] "Generic (PLEG): container finished" podID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerID="4271d7224db735b34906645781ea2372db51f2e3d614022512e9b52eee61ba39" exitCode=143 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.444569 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fc96b95bb-4mtxg" event={"ID":"70c845eb-6695-4de7-8b4a-ef7c6a6701a4","Type":"ContainerDied","Data":"4271d7224db735b34906645781ea2372db51f2e3d614022512e9b52eee61ba39"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.452144 4856 generic.go:334] "Generic (PLEG): container finished" podID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerID="a27e20589e4e9738c8b1ba2a88ec92db294be52ec1405bb5a02a6d451b8e8534" exitCode=2 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.452228 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0768fe63-c6c8-48c2-a121-7216823f73ef","Type":"ContainerDied","Data":"a27e20589e4e9738c8b1ba2a88ec92db294be52ec1405bb5a02a6d451b8e8534"} Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.452849 4856 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/nova-api-0" secret="" err="secret \"nova-nova-dockercfg-9nqcq\" not found" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.452868 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.453050 4856 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/nova-metadata-0" secret="" err="secret \"nova-nova-dockercfg-9nqcq\" not found" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.461094 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi2c75-account-delete-c4rqx"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.468397 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.484354 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.617409 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerName="galera" containerID="cri-o://cde1d5e34fed489806a536b0abe875c6d7151093d591a234d52ed41c693e2b63" gracePeriod=29 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.617454 4856 scope.go:117] "RemoveContainer" containerID="9dcec325019ebdfce923c32261c3801484f6c45ab535eb4623bd34243cd70533" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.659138 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" (UID: "ee10c8a7-96d4-4ee5-8306-a17bceb73cf1"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.662723 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.675463 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.675606 4856 secret.go:188] Couldn't get secret openstack/nova-metadata-config-data: secret "nova-metadata-config-data" not found Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.675660 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data podName:e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:22.675644312 +0000 UTC m=+1785.089037570 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data") pod "nova-metadata-0" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92") : secret "nova-metadata-config-data" not found Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.705672 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.708285 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.708357 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="18fcab55-6a49-4c21-9314-435129cf376a" containerName="nova-cell0-conductor-conductor" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.774285 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c34ba2b-b0cb-4527-b651-a888c0b49d32" path="/var/lib/kubelet/pods/1c34ba2b-b0cb-4527-b651-a888c0b49d32/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.775337 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ab0b87-dec0-42f2-86a2-4e12a02c7573" path="/var/lib/kubelet/pods/a4ab0b87-dec0-42f2-86a2-4e12a02c7573/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.779168 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b446b176-7d24-4bb1-ab69-7d78c1c1e99f" path="/var/lib/kubelet/pods/b446b176-7d24-4bb1-ab69-7d78c1c1e99f/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.782340 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdcf6fb4-5003-482a-88eb-995e4626c8c8" path="/var/lib/kubelet/pods/bdcf6fb4-5003-482a-88eb-995e4626c8c8/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.784284 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be896f81-5804-4e66-8006-51eaa9675cb2" path="/var/lib/kubelet/pods/be896f81-5804-4e66-8006-51eaa9675cb2/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.787625 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c4fd78-c2bf-4a39-8db9-e511ae36a38c" path="/var/lib/kubelet/pods/d8c4fd78-c2bf-4a39-8db9-e511ae36a38c/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.789359 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db34970f-8e46-4f6f-9c3c-437b1a6d7a2d" path="/var/lib/kubelet/pods/db34970f-8e46-4f6f-9c3c-437b1a6d7a2d/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.795010 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e782df2b-d7a8-4319-aead-d5165a61314a" path="/var/lib/kubelet/pods/e782df2b-d7a8-4319-aead-d5165a61314a/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.810364 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="eb9414db-136f-408b-9081-d9ffdaa00e07" path="/var/lib/kubelet/pods/eb9414db-136f-408b-9081-d9ffdaa00e07/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.818940 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f62cc6af-1032-4593-a11f-0dde4a6020ae" path="/var/lib/kubelet/pods/f62cc6af-1032-4593-a11f-0dde4a6020ae/volumes" Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.820397 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-8zttm"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.820429 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-8zttm"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.820451 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron73f8-account-delete-b8zpb"] Nov 22 07:32:20 crc kubenswrapper[4856]: W1122 07:32:20.842775 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod519a764c_9ac2_4f94_84c6_7c284ab676cd.slice/crio-65dcf2061b4f7648accb48bcaf27113b70129034cad6d9ac5be7da9168939260 WatchSource:0}: Error finding container 65dcf2061b4f7648accb48bcaf27113b70129034cad6d9ac5be7da9168939260: Status 404 returned error can't find the container with id 65dcf2061b4f7648accb48bcaf27113b70129034cad6d9ac5be7da9168939260 Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.843104 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hwrb9"] Nov 22 07:32:20 crc kubenswrapper[4856]: I1122 07:32:20.847709 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hwrb9"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.957736 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.977747 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.985169 4856 secret.go:188] Couldn't get secret openstack/nova-api-config-data: secret "nova-api-config-data" not found Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.985233 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data podName:b1ccf431-f692-459f-b249-66bd9747d09c nodeName:}" failed. No retries permitted until 2025-11-22 07:32:22.985218173 +0000 UTC m=+1785.398611431 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data") pod "nova-api-0" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c") : secret "nova-api-config-data" not found Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.990906 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b649794_30ba_493c_9285_05a58981ed36.slice/crio-conmon-507063dad370d0aa753a3a159944ec9f090dd4d59c3360495ed98d90f8250c2e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b649794_30ba_493c_9285_05a58981ed36.slice/crio-conmon-a5a09f33961facab4f00ff54e2e02326d023fd20d2ac164e6dacaf7131204425.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b649794_30ba_493c_9285_05a58981ed36.slice/crio-conmon-ecc44836c8466c6fbcc848350b1a769fe7507c5c9ee03a0001c9685bf0cd78bc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode782df2b_d7a8_4319_aead_d5165a61314a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e80ae2_f12d_4bfb_acca_e60281ef6dd3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1e80ae2_f12d_4bfb_acca_e60281ef6dd3.slice/crio-efd643fc4f2fed96eafc1048f314e88d90b5b3ffb076b18bdd30c237ebb01b33\": RecentStats: unable to find data in memory cache]" Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.991803 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:32:20 crc kubenswrapper[4856]: E1122 07:32:20.991871 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="ovn-northd" Nov 22 07:32:21 crc kubenswrapper[4856]: E1122 07:32:21.326713 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a is running failed: container process not found" containerID="4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 07:32:21 crc kubenswrapper[4856]: E1122 07:32:21.327315 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a is running failed: container process not found" containerID="4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 07:32:21 crc kubenswrapper[4856]: E1122 07:32:21.327538 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a is running failed: container process not found" containerID="4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a" cmd=["/usr/bin/pidof","ovsdb-server"] Nov 22 07:32:21 crc kubenswrapper[4856]: E1122 07:32:21.327567 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="ovsdbserver-sb" Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.474554 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerID="e4ce8b9ed4b91b14fe577f0657b03ac8159da3736fa9337862e230ef16a43afb" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.474612 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c448d48d9-lmlhj" event={"ID":"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313","Type":"ContainerDied","Data":"e4ce8b9ed4b91b14fe577f0657b03ac8159da3736fa9337862e230ef16a43afb"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.476364 4856 generic.go:334] "Generic (PLEG): container finished" podID="8f9815d1-2297-4a66-9793-ba485053ca2a" containerID="75e814a4cfa4f97ecc9bfab324de4d5b2b33d836ae12cc47b87c6782b91c5dae" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.476430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8f9815d1-2297-4a66-9793-ba485053ca2a","Type":"ContainerDied","Data":"75e814a4cfa4f97ecc9bfab324de4d5b2b33d836ae12cc47b87c6782b91c5dae"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.477566 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a94a048-f961-4675-85bf-88414e414a51" containerID="b3b1f2a0ac6e8ef5ca8623acaf447ee1e4d4c639c63af0026dc10d1cc70ff28a" exitCode=137 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.484331 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55308d58-6be6-483d-bc27-2904f15d32f0/ovsdbserver-nb/0.log" Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.484373 4856 generic.go:334] "Generic (PLEG): container finished" podID="55308d58-6be6-483d-bc27-2904f15d32f0" containerID="52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c" exitCode=143 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.484418 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55308d58-6be6-483d-bc27-2904f15d32f0","Type":"ContainerDied","Data":"52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.490210 4856 generic.go:334] "Generic (PLEG): container finished" podID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerID="b6356dec8e3af2060f0508772909c3164a9dbf1ad47a0fddc1e261b2db1f8b4f" exitCode=143 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.490286 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79bdcb776d-cl77m" event={"ID":"39f7a457-9a5c-48b5-86c0-24d274596c8a","Type":"ContainerDied","Data":"b6356dec8e3af2060f0508772909c3164a9dbf1ad47a0fddc1e261b2db1f8b4f"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.495707 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_0768fe63-c6c8-48c2-a121-7216823f73ef/ovsdbserver-sb/0.log" Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.495759 4856 generic.go:334] "Generic (PLEG): container finished" podID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerID="4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a" exitCode=143 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.495847 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0768fe63-c6c8-48c2-a121-7216823f73ef","Type":"ContainerDied","Data":"4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.498399 4856 generic.go:334] "Generic (PLEG): container finished" podID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerID="df445e7e3ade77c2dd919f37116927b5b747d07550482260db0ad6f5970682fd" exitCode=143 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.498545 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f69556b5c-qmsmf" event={"ID":"bfd5417e-43d6-4fe2-807c-8c203cb74c0a","Type":"ContainerDied","Data":"df445e7e3ade77c2dd919f37116927b5b747d07550482260db0ad6f5970682fd"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.500449 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00a5-account-delete-qrc4g" event={"ID":"9eaa66e0-ee9b-4115-b385-222e8ac0c21c","Type":"ContainerStarted","Data":"d07bafd279ede63c481207e2d867b890ff02bb3ed145878df74e9c9bf2234f52"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.504701 4856 generic.go:334] "Generic (PLEG): container finished" podID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerID="926e6fd1571566f17f16d953955d8eb260b1f5bed95ee74d64360f30908b0a98" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.504875 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" event={"ID":"8c76350d-ce88-42c5-8f7c-68c084a511e2","Type":"ContainerDied","Data":"926e6fd1571566f17f16d953955d8eb260b1f5bed95ee74d64360f30908b0a98"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.513694 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican29ba-account-delete-f7rqf" event={"ID":"519a764c-9ac2-4f94-84c6-7c284ab676cd","Type":"ContainerStarted","Data":"65dcf2061b4f7648accb48bcaf27113b70129034cad6d9ac5be7da9168939260"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.515646 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi2c75-account-delete-c4rqx" event={"ID":"cfc2e8cc-04c1-4481-bf7d-d7e99972200f","Type":"ContainerStarted","Data":"85e485f6c2ad89407e94f73dfdb8f6483fd2408bef8c7f63bada4c52644e6f6e"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.518542 4856 generic.go:334] "Generic (PLEG): container finished" podID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerID="89fb7a00fd4efc74515a0c3d4a20db20a62bcd9de48f98ba66ab6036caf8a420" exitCode=143 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.518728 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" event={"ID":"665dbe7c-5370-4a97-8502-e9b25c8acd3a","Type":"ContainerDied","Data":"89fb7a00fd4efc74515a0c3d4a20db20a62bcd9de48f98ba66ab6036caf8a420"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.527713 4856 generic.go:334] "Generic (PLEG): container finished" podID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" 
containerID="72290d753c232f9f411f4eca62ef3cf6c13d4eb7af108e1e14ff35b4c3746200" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.527739 4856 generic.go:334] "Generic (PLEG): container finished" podID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerID="e5b7a326f0ad6ee2471d7167a3c293c93e8329469da146c3d10a4dab31910b17" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.527792 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec2d14e-7026-4f6d-a0b2-13ff53d5e124","Type":"ContainerDied","Data":"72290d753c232f9f411f4eca62ef3cf6c13d4eb7af108e1e14ff35b4c3746200"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.527816 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec2d14e-7026-4f6d-a0b2-13ff53d5e124","Type":"ContainerDied","Data":"e5b7a326f0ad6ee2471d7167a3c293c93e8329469da146c3d10a4dab31910b17"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.543457 4856 generic.go:334] "Generic (PLEG): container finished" podID="285d77d1-e278-4664-97f0-7562e2740a0b" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.543548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zz5h4" event={"ID":"285d77d1-e278-4664-97f0-7562e2740a0b","Type":"ContainerDied","Data":"f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.549159 4856 generic.go:334] "Generic (PLEG): container finished" podID="b049e107-76c1-4669-adb3-7b92560ef90d" containerID="0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01" exitCode=143 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.549227 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b049e107-76c1-4669-adb3-7b92560ef90d","Type":"ContainerDied","Data":"0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.554166 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderceda-account-delete-chlrj" event={"ID":"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f","Type":"ContainerStarted","Data":"9abfc1f2b88c3f7fa0191ded52eb115d679f975757c1bf640410340993f23dc1"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.556634 4856 generic.go:334] "Generic (PLEG): container finished" podID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerID="545281c9124acb52b1ddf1192147efb7e07a95b9f53d9d183531f5e1698bb14f" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.556732 4856 generic.go:334] "Generic (PLEG): container finished" podID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerID="6219833dba75dd8b4b4fd8f9b3965d45ed8beebecf788175cdad2c1025ca7eea" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.556850 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56bc6597ff-ll6fl" event={"ID":"314d3b00-9bb4-4caa-a2dd-521e70e3d73d","Type":"ContainerDied","Data":"545281c9124acb52b1ddf1192147efb7e07a95b9f53d9d183531f5e1698bb14f"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.556940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56bc6597ff-ll6fl" event={"ID":"314d3b00-9bb4-4caa-a2dd-521e70e3d73d","Type":"ContainerDied","Data":"6219833dba75dd8b4b4fd8f9b3965d45ed8beebecf788175cdad2c1025ca7eea"} Nov 22 07:32:21 crc 
kubenswrapper[4856]: I1122 07:32:21.558235 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance1a70-account-delete-m5qqx" event={"ID":"df337886-1469-499f-bbb4-564f479cafa7","Type":"ContainerStarted","Data":"512fcc1df0f14272bdaa8bdcc74bd190f573df805e1cf118ca7c673232d677b1"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.561749 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell07477-account-delete-5hzjb" event={"ID":"a66fa8fc-f908-43e7-a169-6156fc2092f8","Type":"ContainerStarted","Data":"8aa6b04a05a5d76fcbff6f69a1a8e583400f0b15c27cb5021061d3d8cf44602a"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603613 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="dafe6ce95027e629d7af60bc33995b31a71bb7ef4de51b371a2ee48e7639d083" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603649 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="423b2c9f27662f7d6367f52a13a9033ed0e18cb78b5dc553d9b64162d80e2544" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603660 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="9b6021a67115d6e55eab967cf6d9caa17bd06d922a3d54b43b6f5dec9196e96d" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603670 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="1cf12acdc3f6a6abb938bdcfc295ffa2101088f787027d51f80b951797bb5873" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603681 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="ecc44836c8466c6fbcc848350b1a769fe7507c5c9ee03a0001c9685bf0cd78bc" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603689 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="9f435952eb044c7ab5dcb833fc12c8685ca6e3fd82a9405acc66ff7e0a5e1488" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603697 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="4011c89f0b6803e45417d4182117f87df790db47e51c6dc417714bdbab0d9328" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603707 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="be283db24da6932b997e62df069e78ce522bed9042d62990be78c405a0d8baff" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"dafe6ce95027e629d7af60bc33995b31a71bb7ef4de51b371a2ee48e7639d083"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603743 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"423b2c9f27662f7d6367f52a13a9033ed0e18cb78b5dc553d9b64162d80e2544"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"9b6021a67115d6e55eab967cf6d9caa17bd06d922a3d54b43b6f5dec9196e96d"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603761 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"1cf12acdc3f6a6abb938bdcfc295ffa2101088f787027d51f80b951797bb5873"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603770 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"ecc44836c8466c6fbcc848350b1a769fe7507c5c9ee03a0001c9685bf0cd78bc"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603778 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"9f435952eb044c7ab5dcb833fc12c8685ca6e3fd82a9405acc66ff7e0a5e1488"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603716 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="7739725925a289b294a1260a2963889a83f70dbfee02df9ebc4a046996eec165" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603829 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="5aab3b9349e7624b4bdd58b9ddc145142c8697523405f28d16e4f3c04ea145ae" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603857 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="a5a09f33961facab4f00ff54e2e02326d023fd20d2ac164e6dacaf7131204425" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603864 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="199edfe080cf33b200ed5effe88b6a79246b1c89eb804c543da87be52e6c569e" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603871 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="507063dad370d0aa753a3a159944ec9f090dd4d59c3360495ed98d90f8250c2e" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603878 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="c22be9584965ebc42abd66c9bfe89aca421bd210a908db30115541e641df706a" exitCode=0 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603786 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"4011c89f0b6803e45417d4182117f87df790db47e51c6dc417714bdbab0d9328"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603971 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"be283db24da6932b997e62df069e78ce522bed9042d62990be78c405a0d8baff"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603986 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"7739725925a289b294a1260a2963889a83f70dbfee02df9ebc4a046996eec165"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.603995 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"5aab3b9349e7624b4bdd58b9ddc145142c8697523405f28d16e4f3c04ea145ae"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.604004 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"a5a09f33961facab4f00ff54e2e02326d023fd20d2ac164e6dacaf7131204425"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.604015 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"199edfe080cf33b200ed5effe88b6a79246b1c89eb804c543da87be52e6c569e"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.604023 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"507063dad370d0aa753a3a159944ec9f090dd4d59c3360495ed98d90f8250c2e"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.604033 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"c22be9584965ebc42abd66c9bfe89aca421bd210a908db30115541e641df706a"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.606341 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-log" containerID="cri-o://f0b1a60d0b1a6de591d20e91274f4f847de400e209ee3854019d56a6b7527817" gracePeriod=30 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.606380 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron73f8-account-delete-b8zpb" event={"ID":"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8","Type":"ContainerStarted","Data":"193af241635c99ea47ec3315efa583c295c6bf29160559a1528156c08c60bc9d"} Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.606500 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-api" containerID="cri-o://748dcd5bb334b4bc2361b63a4afbafd4286f9d6147d5c3a3a460a57c1f55b549" gracePeriod=30 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.606626 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-log" containerID="cri-o://30f6c92bfa88e0c50a824bbd5fb87ff5b3d7fbb4606aca9dfc830b62320a94a1" gracePeriod=30 Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.606673 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-metadata" containerID="cri-o://34b8ef8ac4487f65f5dff6c904e4aa6b5fc3a3fd278121552b6ef063060959ec" gracePeriod=30 Nov 22 07:32:21 crc kubenswrapper[4856]: E1122 07:32:21.705815 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:21 crc kubenswrapper[4856]: E1122 07:32:21.705899 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data 
podName:4ac8c44e-0667-43f7-aebd-a7b4c5bcb429 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:25.705878568 +0000 UTC m=+1788.119271826 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data") pod "rabbitmq-cell1-server-0" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.816421 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.909830 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-config\") pod \"8c76350d-ce88-42c5-8f7c-68c084a511e2\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.909883 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-nb\") pod \"8c76350d-ce88-42c5-8f7c-68c084a511e2\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.909920 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-sb\") pod \"8c76350d-ce88-42c5-8f7c-68c084a511e2\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.909974 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcpmw\" (UniqueName: \"kubernetes.io/projected/8c76350d-ce88-42c5-8f7c-68c084a511e2-kube-api-access-zcpmw\") pod \"8c76350d-ce88-42c5-8f7c-68c084a511e2\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.910041 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-swift-storage-0\") pod \"8c76350d-ce88-42c5-8f7c-68c084a511e2\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.910117 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-svc\") pod \"8c76350d-ce88-42c5-8f7c-68c084a511e2\" (UID: \"8c76350d-ce88-42c5-8f7c-68c084a511e2\") " Nov 22 07:32:21 crc kubenswrapper[4856]: I1122 07:32:21.923601 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c76350d-ce88-42c5-8f7c-68c084a511e2-kube-api-access-zcpmw" (OuterVolumeSpecName: "kube-api-access-zcpmw") pod "8c76350d-ce88-42c5-8f7c-68c084a511e2" (UID: "8c76350d-ce88-42c5-8f7c-68c084a511e2"). InnerVolumeSpecName "kube-api-access-zcpmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.017612 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcpmw\" (UniqueName: \"kubernetes.io/projected/8c76350d-ce88-42c5-8f7c-68c084a511e2-kube-api-access-zcpmw\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: E1122 07:32:22.017732 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:32:22 crc kubenswrapper[4856]: E1122 07:32:22.017822 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data podName:0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:26.017796493 +0000 UTC m=+1788.431189751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data") pod "rabbitmq-server-0" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89") : configmap "rabbitmq-config-data" not found Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.019619 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c76350d-ce88-42c5-8f7c-68c084a511e2" (UID: "8c76350d-ce88-42c5-8f7c-68c084a511e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.056917 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c76350d-ce88-42c5-8f7c-68c084a511e2" (UID: "8c76350d-ce88-42c5-8f7c-68c084a511e2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.073013 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-config" (OuterVolumeSpecName: "config") pod "8c76350d-ce88-42c5-8f7c-68c084a511e2" (UID: "8c76350d-ce88-42c5-8f7c-68c084a511e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.088040 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c76350d-ce88-42c5-8f7c-68c084a511e2" (UID: "8c76350d-ce88-42c5-8f7c-68c084a511e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.102828 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c76350d-ce88-42c5-8f7c-68c084a511e2" (UID: "8c76350d-ce88-42c5-8f7c-68c084a511e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.119731 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.119767 4856 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.119776 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.119787 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.119798 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c76350d-ce88-42c5-8f7c-68c084a511e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.492163 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0768fe63-c6c8-48c2-a121-7216823f73ef/ovsdbserver-sb/0.log" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.492488 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.557649 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.164:9292/healthcheck\": read tcp 10.217.0.2:58570->10.217.0.164:9292: read: connection reset by peer" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.559660 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.164:9292/healthcheck\": read tcp 10.217.0.2:58574->10.217.0.164:9292: read: connection reset by peer" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.627538 4856 generic.go:334] "Generic (PLEG): container finished" podID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerID="bb57a5740eec3fe63e3bb880f72bda941c5f54b634af051477157f490cf788ec" exitCode=0 Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.627609 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fc96b95bb-4mtxg" event={"ID":"70c845eb-6695-4de7-8b4a-ef7c6a6701a4","Type":"ContainerDied","Data":"bb57a5740eec3fe63e3bb880f72bda941c5f54b634af051477157f490cf788ec"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.630809 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-config\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.630986 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-metrics-certs-tls-certs\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.631018 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdbserver-sb-tls-certs\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.631040 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.631069 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-scripts\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.631110 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwf99\" (UniqueName: \"kubernetes.io/projected/0768fe63-c6c8-48c2-a121-7216823f73ef-kube-api-access-bwf99\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.631203 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdb-rundir\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.631279 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-combined-ca-bundle\") pod \"0768fe63-c6c8-48c2-a121-7216823f73ef\" (UID: \"0768fe63-c6c8-48c2-a121-7216823f73ef\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.632269 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-config" (OuterVolumeSpecName: "config") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.632959 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-scripts" (OuterVolumeSpecName: "scripts") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.639619 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.641874 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.645637 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.645729 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.645742 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.645777 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0768fe63-c6c8-48c2-a121-7216823f73ef-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.650024 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderceda-account-delete-chlrj" event={"ID":"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f","Type":"ContainerStarted","Data":"9741711f509fcaac60e11d8c80612fcee97889c09aee1bfdf5b301c894e0da33"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.655180 4856 generic.go:334] "Generic (PLEG): container finished" podID="df337886-1469-499f-bbb4-564f479cafa7" containerID="512fcc1df0f14272bdaa8bdcc74bd190f573df805e1cf118ca7c673232d677b1" exitCode=0 Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.655264 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance1a70-account-delete-m5qqx" event={"ID":"df337886-1469-499f-bbb4-564f479cafa7","Type":"ContainerDied","Data":"512fcc1df0f14272bdaa8bdcc74bd190f573df805e1cf118ca7c673232d677b1"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.658359 4856 generic.go:334] "Generic (PLEG): container finished" podID="b1ccf431-f692-459f-b249-66bd9747d09c" containerID="f0b1a60d0b1a6de591d20e91274f4f847de400e209ee3854019d56a6b7527817" exitCode=143 Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.658516 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1ccf431-f692-459f-b249-66bd9747d09c","Type":"ContainerDied","Data":"f0b1a60d0b1a6de591d20e91274f4f847de400e209ee3854019d56a6b7527817"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.660776 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0768fe63-c6c8-48c2-a121-7216823f73ef-kube-api-access-bwf99" (OuterVolumeSpecName: "kube-api-access-bwf99") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "kube-api-access-bwf99". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.665932 4856 generic.go:334] "Generic (PLEG): container finished" podID="18fcab55-6a49-4c21-9314-435129cf376a" containerID="ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411" exitCode=0 Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.666169 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"18fcab55-6a49-4c21-9314-435129cf376a","Type":"ContainerDied","Data":"ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.667907 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron73f8-account-delete-b8zpb" event={"ID":"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8","Type":"ContainerStarted","Data":"acef6c657dbb259f9e9177fab5afef052fec2f5a02a3c72c2be4304e9d337a1c"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.669034 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinderceda-account-delete-chlrj" podStartSLOduration=5.669015336 podStartE2EDuration="5.669015336s" podCreationTimestamp="2025-11-22 07:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:22.6625473 +0000 UTC m=+1785.075940558" watchObservedRunningTime="2025-11-22 07:32:22.669015336 +0000 UTC m=+1785.082408594" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.711755 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.712045 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" Nov 22 07:32:22 crc kubenswrapper[4856]: E1122 07:32:22.715944 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.720659 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron73f8-account-delete-b8zpb" podStartSLOduration=4.720639212 podStartE2EDuration="4.720639212s" podCreationTimestamp="2025-11-22 07:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:22.716619773 +0000 UTC m=+1785.130013021" watchObservedRunningTime="2025-11-22 07:32:22.720639212 +0000 UTC m=+1785.134032470" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.722924 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.738426 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" path="/var/lib/kubelet/pods/e1e80ae2-f12d-4bfb-acca-e60281ef6dd3/volumes" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.739153 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" path="/var/lib/kubelet/pods/ee10c8a7-96d4-4ee5-8306-a17bceb73cf1/volumes" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.740182 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.742439 4856 generic.go:334] "Generic (PLEG): container finished" podID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerID="30f6c92bfa88e0c50a824bbd5fb87ff5b3d7fbb4606aca9dfc830b62320a94a1" exitCode=143 Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.748884 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.748925 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwf99\" (UniqueName: \"kubernetes.io/projected/0768fe63-c6c8-48c2-a121-7216823f73ef-kube-api-access-bwf99\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.748937 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: E1122 07:32:22.749287 4856 secret.go:188] Couldn't get secret openstack/nova-metadata-config-data: secret "nova-metadata-config-data" not 
found Nov 22 07:32:22 crc kubenswrapper[4856]: E1122 07:32:22.749371 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data podName:e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:26.749348294 +0000 UTC m=+1789.162741562 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data") pod "nova-metadata-0" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92") : secret "nova-metadata-config-data" not found Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.756977 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b57bd9f89-z95qh" event={"ID":"8c76350d-ce88-42c5-8f7c-68c084a511e2","Type":"ContainerDied","Data":"5322cf2859055f4460688dafde41c71b7d57a773b39f772049d7578374f41363"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.757032 4856 scope.go:117] "RemoveContainer" containerID="926e6fd1571566f17f16d953955d8eb260b1f5bed95ee74d64360f30908b0a98" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.757789 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican29ba-account-delete-f7rqf" event={"ID":"519a764c-9ac2-4f94-84c6-7c284ab676cd","Type":"ContainerStarted","Data":"7c3193dd655842750b4950a2f0999bd2a78e535166e5cf2551e4cdaf1b19f49e"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.757818 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92","Type":"ContainerDied","Data":"30f6c92bfa88e0c50a824bbd5fb87ff5b3d7fbb4606aca9dfc830b62320a94a1"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.757840 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi2c75-account-delete-c4rqx" event={"ID":"cfc2e8cc-04c1-4481-bf7d-d7e99972200f","Type":"ContainerStarted","Data":"2be53b136ee6ae58f4111e796dc3dacfcd801bacdf0e16aa09eabf48d1ca897c"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.761636 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican29ba-account-delete-f7rqf" podStartSLOduration=5.761615108 podStartE2EDuration="5.761615108s" podCreationTimestamp="2025-11-22 07:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:22.755941144 +0000 UTC m=+1785.169334402" watchObservedRunningTime="2025-11-22 07:32:22.761615108 +0000 UTC m=+1785.175008366" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.769855 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0768fe63-c6c8-48c2-a121-7216823f73ef/ovsdbserver-sb/0.log" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.770143 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.770579 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0768fe63-c6c8-48c2-a121-7216823f73ef","Type":"ContainerDied","Data":"586c46078d618d2d93d14100623bfa053417297c9f2511cf5dae809eb647e663"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.780139 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.805118 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapi2c75-account-delete-c4rqx" podStartSLOduration=4.805095172 podStartE2EDuration="4.805095172s" podCreationTimestamp="2025-11-22 07:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:22.775462415 +0000 UTC m=+1785.188855673" watchObservedRunningTime="2025-11-22 07:32:22.805095172 +0000 UTC m=+1785.218488430" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.815423 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell07477-account-delete-5hzjb" event={"ID":"a66fa8fc-f908-43e7-a169-6156fc2092f8","Type":"ContainerStarted","Data":"2dec82303da62aba427908f57f4ad7d03b32f004aa85bacfde2cdfa11f792f02"} Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.823370 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b57bd9f89-z95qh"] Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.829971 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.831190 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b57bd9f89-z95qh"] Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.845945 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell07477-account-delete-5hzjb" podStartSLOduration=4.845924174 podStartE2EDuration="4.845924174s" podCreationTimestamp="2025-11-22 07:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:22.838824321 +0000 UTC m=+1785.252217589" watchObservedRunningTime="2025-11-22 07:32:22.845924174 +0000 UTC m=+1785.259317432" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.865201 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement00a5-account-delete-qrc4g" podStartSLOduration=5.865178928 podStartE2EDuration="5.865178928s" podCreationTimestamp="2025-11-22 07:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 07:32:22.854098047 +0000 UTC m=+1785.267491315" watchObservedRunningTime="2025-11-22 07:32:22.865178928 +0000 UTC m=+1785.278572186" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.875400 4856 scope.go:117] "RemoveContainer" containerID="da67b84177209e9078903a9d7ca7f3ae2a9d1b2f39212601d306011553e1be52" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.876687 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.923327 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0768fe63-c6c8-48c2-a121-7216823f73ef" (UID: "0768fe63-c6c8-48c2-a121-7216823f73ef"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.952016 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.166:9292/healthcheck\": read tcp 10.217.0.2:58666->10.217.0.166:9292: read: connection reset by peer" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.952491 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.166:9292/healthcheck\": read tcp 10.217.0.2:58678->10.217.0.166:9292: read: connection reset by peer" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.978782 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h8fx\" (UniqueName: \"kubernetes.io/projected/9a94a048-f961-4675-85bf-88414e414a51-kube-api-access-5h8fx\") pod \"9a94a048-f961-4675-85bf-88414e414a51\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.978875 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-openstack-config-secret\") pod \"9a94a048-f961-4675-85bf-88414e414a51\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.978942 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a94a048-f961-4675-85bf-88414e414a51-openstack-config\") pod \"9a94a048-f961-4675-85bf-88414e414a51\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.978976 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-combined-ca-bundle\") pod \"9a94a048-f961-4675-85bf-88414e414a51\" (UID: \"9a94a048-f961-4675-85bf-88414e414a51\") " Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.980666 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0768fe63-c6c8-48c2-a121-7216823f73ef-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:22 crc kubenswrapper[4856]: I1122 07:32:22.987420 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a94a048-f961-4675-85bf-88414e414a51-kube-api-access-5h8fx" (OuterVolumeSpecName: "kube-api-access-5h8fx") pod "9a94a048-f961-4675-85bf-88414e414a51" (UID: "9a94a048-f961-4675-85bf-88414e414a51"). InnerVolumeSpecName "kube-api-access-5h8fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.044333 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a94a048-f961-4675-85bf-88414e414a51-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9a94a048-f961-4675-85bf-88414e414a51" (UID: "9a94a048-f961-4675-85bf-88414e414a51"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.051339 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a94a048-f961-4675-85bf-88414e414a51" (UID: "9a94a048-f961-4675-85bf-88414e414a51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.081500 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9a94a048-f961-4675-85bf-88414e414a51" (UID: "9a94a048-f961-4675-85bf-88414e414a51"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.082106 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.082422 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9a94a048-f961-4675-85bf-88414e414a51-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.082553 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a94a048-f961-4675-85bf-88414e414a51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.082614 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h8fx\" (UniqueName: \"kubernetes.io/projected/9a94a048-f961-4675-85bf-88414e414a51-kube-api-access-5h8fx\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.082744 4856 secret.go:188] Couldn't get secret openstack/nova-api-config-data: secret "nova-api-config-data" not found Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.082833 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data podName:b1ccf431-f692-459f-b249-66bd9747d09c nodeName:}" failed. No retries permitted until 2025-11-22 07:32:27.082818276 +0000 UTC m=+1789.496211534 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data") pod "nova-api-0" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c") : secret "nova-api-config-data" not found Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.174058 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.174596 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.174902 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.174933 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.175296 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.176785 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.178176 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:23 crc kubenswrapper[4856]: E1122 07:32:23.178217 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.502579 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.503223 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-central-agent" containerID="cri-o://24259cf1c1f38f1bc7f64997b64b9ed69fb4bf62d123b79b4fadefd0f143056d" gracePeriod=30 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.503783 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="proxy-httpd" containerID="cri-o://b68c3e9d5fec381205cff7840dff84ed802d1d3dd4294ad59eed929c11d88ac0" gracePeriod=30 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.503841 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="sg-core" containerID="cri-o://b22af23b8eca911c39bf860e938113315fcb9f3dd60e8b97761359b25855b4a1" gracePeriod=30 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.503880 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-notification-agent" containerID="cri-o://02a270d659156bdef916a33cbab50d2c8c0cc0527187e2d9fcd2dc12495e6671" gracePeriod=30 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.543696 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.548368 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.548615 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="4a38fdd7-2dc0-4ebc-91c7-359d0e437900" containerName="kube-state-metrics" containerID="cri-o://79ac3da01d567af671e8140ba0abef013a08691b348676216927e29a7c793bcc" gracePeriod=30 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.578807 4856 scope.go:117] "RemoveContainer" containerID="a27e20589e4e9738c8b1ba2a88ec92db294be52ec1405bb5a02a6d451b8e8534" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.585315 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.594779 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.618132 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.630132 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.657560 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.663118 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": dial tcp 10.217.0.172:9311: connect: connection refused" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.674555 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-79bdcb776d-cl77m" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": dial tcp 10.217.0.172:9311: connect: connection refused" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.685487 4856 scope.go:117] "RemoveContainer" containerID="4e82ed304d356b9a96c6aba3ac20d49662a3c4b69636547ee5ba622240c5a35a" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.696870 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-config-data\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.696946 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-combined-ca-bundle\") pod \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.696975 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lbrg\" (UniqueName: \"kubernetes.io/projected/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-kube-api-access-2lbrg\") pod \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.696996 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data\") pod \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697038 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-combined-ca-bundle\") pod \"8f9815d1-2297-4a66-9793-ba485053ca2a\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697072 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-combined-ca-bundle\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697096 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data-custom\") pod \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697157 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-config-data\") pod \"8f9815d1-2297-4a66-9793-ba485053ca2a\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697182 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-public-tls-certs\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697213 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-log-httpd\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697256 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-nova-novncproxy-tls-certs\") pod \"8f9815d1-2297-4a66-9793-ba485053ca2a\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697286 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-internal-tls-certs\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697309 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-etc-machine-id\") pod \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjznr\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-kube-api-access-qjznr\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697394 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vrn9\" (UniqueName: \"kubernetes.io/projected/8f9815d1-2297-4a66-9793-ba485053ca2a-kube-api-access-6vrn9\") pod \"8f9815d1-2297-4a66-9793-ba485053ca2a\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697428 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppn7t\" (UniqueName: \"kubernetes.io/projected/18fcab55-6a49-4c21-9314-435129cf376a-kube-api-access-ppn7t\") pod \"18fcab55-6a49-4c21-9314-435129cf376a\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 
07:32:23.697470 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-etc-swift\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697528 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-combined-ca-bundle\") pod \"18fcab55-6a49-4c21-9314-435129cf376a\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697551 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-run-httpd\") pod \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\" (UID: \"314d3b00-9bb4-4caa-a2dd-521e70e3d73d\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697586 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-vencrypt-tls-certs\") pod \"8f9815d1-2297-4a66-9793-ba485053ca2a\" (UID: \"8f9815d1-2297-4a66-9793-ba485053ca2a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697624 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-scripts\") pod \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\" (UID: \"aec2d14e-7026-4f6d-a0b2-13ff53d5e124\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.697653 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-config-data\") pod \"18fcab55-6a49-4c21-9314-435129cf376a\" (UID: \"18fcab55-6a49-4c21-9314-435129cf376a\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.710331 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aec2d14e-7026-4f6d-a0b2-13ff53d5e124" (UID: "aec2d14e-7026-4f6d-a0b2-13ff53d5e124"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.714796 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.715892 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.716159 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="63e9edb8-ed05-4d0f-aff1-d59b369cd76d" containerName="memcached" containerID="cri-o://63572ca1ab3b819180a4d2cdb47a2c1f194a6daee761f767b694471277028ac6" gracePeriod=30 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.750770 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.751116 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.761956 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18fcab55-6a49-4c21-9314-435129cf376a-kube-api-access-ppn7t" (OuterVolumeSpecName: "kube-api-access-ppn7t") pod "18fcab55-6a49-4c21-9314-435129cf376a" (UID: "18fcab55-6a49-4c21-9314-435129cf376a"). InnerVolumeSpecName "kube-api-access-ppn7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.766870 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-kube-api-access-qjznr" (OuterVolumeSpecName: "kube-api-access-qjznr") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "kube-api-access-qjznr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.783531 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aec2d14e-7026-4f6d-a0b2-13ff53d5e124" (UID: "aec2d14e-7026-4f6d-a0b2-13ff53d5e124"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.786778 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-kube-api-access-2lbrg" (OuterVolumeSpecName: "kube-api-access-2lbrg") pod "aec2d14e-7026-4f6d-a0b2-13ff53d5e124" (UID: "aec2d14e-7026-4f6d-a0b2-13ff53d5e124"). InnerVolumeSpecName "kube-api-access-2lbrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.786820 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.810142 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f9815d1-2297-4a66-9793-ba485053ca2a-kube-api-access-6vrn9" (OuterVolumeSpecName: "kube-api-access-6vrn9") pod "8f9815d1-2297-4a66-9793-ba485053ca2a" (UID: "8f9815d1-2297-4a66-9793-ba485053ca2a"). InnerVolumeSpecName "kube-api-access-6vrn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.810421 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-scripts\") pod \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.810499 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-combined-ca-bundle\") pod \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.810626 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6n5k\" (UniqueName: \"kubernetes.io/projected/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-kube-api-access-g6n5k\") pod \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.835898 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-internal-tls-certs\") pod \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.836032 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-public-tls-certs\") pod \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.836115 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-logs\") pod \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.836140 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-config-data\") pod \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\" (UID: \"70c845eb-6695-4de7-8b4a-ef7c6a6701a4\") " Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.851979 4856 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852028 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852051 
4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lbrg\" (UniqueName: \"kubernetes.io/projected/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-kube-api-access-2lbrg\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852067 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852080 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852092 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852103 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjznr\" (UniqueName: \"kubernetes.io/projected/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-kube-api-access-qjznr\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852131 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vrn9\" (UniqueName: \"kubernetes.io/projected/8f9815d1-2297-4a66-9793-ba485053ca2a-kube-api-access-6vrn9\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.852144 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppn7t\" (UniqueName: \"kubernetes.io/projected/18fcab55-6a49-4c21-9314-435129cf376a-kube-api-access-ppn7t\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.863806 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-config-data" (OuterVolumeSpecName: "config-data") pod "18fcab55-6a49-4c21-9314-435129cf376a" (UID: "18fcab55-6a49-4c21-9314-435129cf376a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.869919 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-logs" (OuterVolumeSpecName: "logs") pod "70c845eb-6695-4de7-8b4a-ef7c6a6701a4" (UID: "70c845eb-6695-4de7-8b4a-ef7c6a6701a4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.875962 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-scripts" (OuterVolumeSpecName: "scripts") pod "aec2d14e-7026-4f6d-a0b2-13ff53d5e124" (UID: "aec2d14e-7026-4f6d-a0b2-13ff53d5e124"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.876372 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55308d58-6be6-483d-bc27-2904f15d32f0/ovsdbserver-nb/0.log" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.876440 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.885238 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.899542 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.909671 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-scripts" (OuterVolumeSpecName: "scripts") pod "70c845eb-6695-4de7-8b4a-ef7c6a6701a4" (UID: "70c845eb-6695-4de7-8b4a-ef7c6a6701a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.918856 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-kube-api-access-g6n5k" (OuterVolumeSpecName: "kube-api-access-g6n5k") pod "70c845eb-6695-4de7-8b4a-ef7c6a6701a4" (UID: "70c845eb-6695-4de7-8b4a-ef7c6a6701a4"). InnerVolumeSpecName "kube-api-access-g6n5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.935963 4856 generic.go:334] "Generic (PLEG): container finished" podID="519a764c-9ac2-4f94-84c6-7c284ab676cd" containerID="7c3193dd655842750b4950a2f0999bd2a78e535166e5cf2551e4cdaf1b19f49e" exitCode=0 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.936057 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican29ba-account-delete-f7rqf" event={"ID":"519a764c-9ac2-4f94-84c6-7c284ab676cd","Type":"ContainerDied","Data":"7c3193dd655842750b4950a2f0999bd2a78e535166e5cf2551e4cdaf1b19f49e"} Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.949923 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vqbwk"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.950841 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-fc96b95bb-4mtxg" event={"ID":"70c845eb-6695-4de7-8b4a-ef7c6a6701a4","Type":"ContainerDied","Data":"75113acf2dbb16ae1bdf42ce4f05d360fbccf9d7cd342f7d75054ed30301fec5"} Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.950854 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-fc96b95bb-4mtxg" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.954311 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.954345 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.954356 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6n5k\" (UniqueName: \"kubernetes.io/projected/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-kube-api-access-g6n5k\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.954366 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.954377 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.954753 4856 generic.go:334] "Generic (PLEG): container finished" podID="a66fa8fc-f908-43e7-a169-6156fc2092f8" containerID="2dec82303da62aba427908f57f4ad7d03b32f004aa85bacfde2cdfa11f792f02" exitCode=0 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.954844 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell07477-account-delete-5hzjb" event={"ID":"a66fa8fc-f908-43e7-a169-6156fc2092f8","Type":"ContainerDied","Data":"2dec82303da62aba427908f57f4ad7d03b32f004aa85bacfde2cdfa11f792f02"} Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.957115 4856 generic.go:334] "Generic (PLEG): container finished" podID="9eaa66e0-ee9b-4115-b385-222e8ac0c21c" containerID="d07bafd279ede63c481207e2d867b890ff02bb3ed145878df74e9c9bf2234f52" exitCode=0 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.957177 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00a5-account-delete-qrc4g" event={"ID":"9eaa66e0-ee9b-4115-b385-222e8ac0c21c","Type":"ContainerDied","Data":"d07bafd279ede63c481207e2d867b890ff02bb3ed145878df74e9c9bf2234f52"} Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.958264 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wczqs"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.966664 4856 scope.go:117] "RemoveContainer" containerID="bb57a5740eec3fe63e3bb880f72bda941c5f54b634af051477157f490cf788ec" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.968919 4856 generic.go:334] "Generic (PLEG): container finished" podID="b049e107-76c1-4669-adb3-7b92560ef90d" containerID="ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42" exitCode=0 Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.969056 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.969492 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vqbwk"] Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.969540 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b049e107-76c1-4669-adb3-7b92560ef90d","Type":"ContainerDied","Data":"ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42"} Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.969562 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b049e107-76c1-4669-adb3-7b92560ef90d","Type":"ContainerDied","Data":"1160ab49f39ac26a166bd02b3d5ea23beb4ec8bb76ba31c284a961efc8ae7ec7"} Nov 22 07:32:23 crc kubenswrapper[4856]: I1122 07:32:23.996995 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wczqs"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.023532 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"18fcab55-6a49-4c21-9314-435129cf376a","Type":"ContainerDied","Data":"5f110a7dd89cb87d7a3272051d8d04c315023c6ced6e8dada6479123218a9953"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.023904 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone39ec-account-delete-6j6qn"] Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.025573 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-server" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.025658 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-server" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.025675 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="probe" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.025682 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="probe" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.025706 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="ovsdbserver-nb" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.025713 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="ovsdbserver-nb" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.025728 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.025736 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.025750 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.025758 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.025770 4856 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.025779 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.027061 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.027794 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-api" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.027815 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-api" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.027986 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerName="init" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028005 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerName="init" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028022 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18fcab55-6a49-4c21-9314-435129cf376a" containerName="nova-cell0-conductor-conductor" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028031 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="18fcab55-6a49-4c21-9314-435129cf376a" containerName="nova-cell0-conductor-conductor" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028042 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="cinder-scheduler" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028051 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="cinder-scheduler" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028073 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerName="dnsmasq-dns" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028081 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerName="dnsmasq-dns" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028098 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028106 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028125 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-httpd" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028132 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-httpd" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028141 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028148 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028164 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028173 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028190 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028197 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-log" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028207 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f9815d1-2297-4a66-9793-ba485053ca2a" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028216 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f9815d1-2297-4a66-9793-ba485053ca2a" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028229 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="ovsdbserver-sb" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028236 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="ovsdbserver-sb" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028247 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-httpd" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028254 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-httpd" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.028264 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028271 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028552 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-server" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028574 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="ovsdbserver-nb" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028586 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028597 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028606 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" containerName="openstack-network-exporter" Nov 22 07:32:24 crc 
kubenswrapper[4856]: I1122 07:32:24.028620 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e80ae2-f12d-4bfb-acca-e60281ef6dd3" containerName="ovn-controller" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028628 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028638 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f9815d1-2297-4a66-9793-ba485053ca2a" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028647 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" containerName="dnsmasq-dns" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028659 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee10c8a7-96d4-4ee5-8306-a17bceb73cf1" containerName="openstack-network-exporter" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028673 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-httpd" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028687 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028697 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerName="glance-log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028709 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" containerName="placement-api" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028719 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="18fcab55-6a49-4c21-9314-435129cf376a" containerName="nova-cell0-conductor-conductor" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028733 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" containerName="proxy-httpd" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028741 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="probe" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028758 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" containerName="cinder-scheduler" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.028772 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" containerName="ovsdbserver-sb" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.030280 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.030660 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.035436 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6cf775d657-87zdn"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.035626 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6cf775d657-87zdn" podUID="f6976ffd-7286-4347-b8af-607803a96768" containerName="keystone-api" containerID="cri-o://3cdce92348e8a5abc8c54f390907c002ea710c31b653f0e1d2c690885f3a2712" gracePeriod=30 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.039662 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-config-data" (OuterVolumeSpecName: "config-data") pod "70c845eb-6695-4de7-8b4a-ef7c6a6701a4" (UID: "70c845eb-6695-4de7-8b4a-ef7c6a6701a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.044577 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055580 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-scripts\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055629 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-scripts\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055698 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b049e107-76c1-4669-adb3-7b92560ef90d-etc-machine-id\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-scripts\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055779 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b049e107-76c1-4669-adb3-7b92560ef90d-logs\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055865 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-internal-tls-certs\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055894 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rrhz\" (UniqueName: \"kubernetes.io/projected/bb0a212d-74dc-40d3-84a4-bce83b78e788-kube-api-access-4rrhz\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" 
(UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055931 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdb-rundir\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055954 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ts6n\" (UniqueName: \"kubernetes.io/projected/55308d58-6be6-483d-bc27-2904f15d32f0-kube-api-access-9ts6n\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.055983 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-public-tls-certs\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056012 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-httpd-run\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056046 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-config\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056089 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data-custom\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056187 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndxkg\" (UniqueName: \"kubernetes.io/projected/b049e107-76c1-4669-adb3-7b92560ef90d-kube-api-access-ndxkg\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056241 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-metrics-certs-tls-certs\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056270 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" (UID: 
\"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056292 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-combined-ca-bundle\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056334 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056374 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-logs\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056396 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-combined-ca-bundle\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056417 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdbserver-nb-tls-certs\") pod \"55308d58-6be6-483d-bc27-2904f15d32f0\" (UID: \"55308d58-6be6-483d-bc27-2904f15d32f0\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056444 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-public-tls-certs\") pod \"b049e107-76c1-4669-adb3-7b92560ef90d\" (UID: \"b049e107-76c1-4669-adb3-7b92560ef90d\") " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.056478 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-combined-ca-bundle\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.056665 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.057373 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-config-data\") pod \"bb0a212d-74dc-40d3-84a4-bce83b78e788\" (UID: \"bb0a212d-74dc-40d3-84a4-bce83b78e788\") " Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.058824 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.059216 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.060174 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-config" (OuterVolumeSpecName: "config") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.061911 4856 generic.go:334] "Generic (PLEG): container finished" podID="e0f8403e-a06a-4804-b60a-98974506f547" containerID="b68c3e9d5fec381205cff7840dff84ed802d1d3dd4294ad59eed929c11d88ac0" exitCode=0 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.062742 4856 generic.go:334] "Generic (PLEG): container finished" podID="e0f8403e-a06a-4804-b60a-98974506f547" containerID="b22af23b8eca911c39bf860e938113315fcb9f3dd60e8b97761359b25855b4a1" exitCode=2 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.062119 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.062846 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.062865 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.063431 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b049e107-76c1-4669-adb3-7b92560ef90d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.063731 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b049e107-76c1-4669-adb3-7b92560ef90d-logs" (OuterVolumeSpecName: "logs") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.064059 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.064104 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="2b88f55c-12d5-4cba-a155-aa00c19c94f4" containerName="nova-cell1-conductor-conductor" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.065500 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.065608 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerDied","Data":"b68c3e9d5fec381205cff7840dff84ed802d1d3dd4294ad59eed929c11d88ac0"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.065667 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerDied","Data":"b22af23b8eca911c39bf860e938113315fcb9f3dd60e8b97761359b25855b4a1"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.070718 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-scripts" (OuterVolumeSpecName: "scripts") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.070955 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone39ec-account-delete-6j6qn"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.089418 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-logs" (OuterVolumeSpecName: "logs") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.096833 4856 generic.go:334] "Generic (PLEG): container finished" podID="cfc2e8cc-04c1-4481-bf7d-d7e99972200f" containerID="2be53b136ee6ae58f4111e796dc3dacfcd801bacdf0e16aa09eabf48d1ca897c" exitCode=0 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.097295 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0a212d-74dc-40d3-84a4-bce83b78e788-kube-api-access-4rrhz" (OuterVolumeSpecName: "kube-api-access-4rrhz") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "kube-api-access-4rrhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.097333 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-rt7kb"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.097369 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi2c75-account-delete-c4rqx" event={"ID":"cfc2e8cc-04c1-4481-bf7d-d7e99972200f","Type":"ContainerDied","Data":"2be53b136ee6ae58f4111e796dc3dacfcd801bacdf0e16aa09eabf48d1ca897c"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.098638 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.104880 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55308d58-6be6-483d-bc27-2904f15d32f0-kube-api-access-9ts6n" (OuterVolumeSpecName: "kube-api-access-9ts6n") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "kube-api-access-9ts6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.106665 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-scripts" (OuterVolumeSpecName: "scripts") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.106680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.106702 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-config-data" (OuterVolumeSpecName: "config-data") pod "8f9815d1-2297-4a66-9793-ba485053ca2a" (UID: "8f9815d1-2297-4a66-9793-ba485053ca2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.110342 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.115518 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-scripts" (OuterVolumeSpecName: "scripts") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.117976 4856 generic.go:334] "Generic (PLEG): container finished" podID="4a38fdd7-2dc0-4ebc-91c7-359d0e437900" containerID="79ac3da01d567af671e8140ba0abef013a08691b348676216927e29a7c793bcc" exitCode=2 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.118078 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a38fdd7-2dc0-4ebc-91c7-359d0e437900","Type":"ContainerDied","Data":"79ac3da01d567af671e8140ba0abef013a08691b348676216927e29a7c793bcc"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.122943 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-rt7kb"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.127971 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b049e107-76c1-4669-adb3-7b92560ef90d-kube-api-access-ndxkg" (OuterVolumeSpecName: "kube-api-access-ndxkg") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "kube-api-access-ndxkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.131234 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-73f8-account-create-scphz"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.135580 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.140974 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron73f8-account-delete-b8zpb"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.148917 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_55308d58-6be6-483d-bc27-2904f15d32f0/ovsdbserver-nb/0.log" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.149009 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"55308d58-6be6-483d-bc27-2904f15d32f0","Type":"ContainerDied","Data":"c5d83babff0062e1a2e5abe3ec909b187ab8e44bbfd6eab77bdac52642c62e4b"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.149101 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.155022 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-73f8-account-create-scphz"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.167632 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-8278h"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169377 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88779\" (UniqueName: \"kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169485 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169603 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rrhz\" (UniqueName: \"kubernetes.io/projected/bb0a212d-74dc-40d3-84a4-bce83b78e788-kube-api-access-4rrhz\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169658 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169686 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ts6n\" (UniqueName: \"kubernetes.io/projected/55308d58-6be6-483d-bc27-2904f15d32f0-kube-api-access-9ts6n\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169701 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169716 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169728 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndxkg\" (UniqueName: \"kubernetes.io/projected/b049e107-76c1-4669-adb3-7b92560ef90d-kube-api-access-ndxkg\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169756 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169774 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169787 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/bb0a212d-74dc-40d3-84a4-bce83b78e788-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169798 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55308d58-6be6-483d-bc27-2904f15d32f0-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169808 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169819 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b049e107-76c1-4669-adb3-7b92560ef90d-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169830 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.169841 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b049e107-76c1-4669-adb3-7b92560ef90d-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.170778 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aec2d14e-7026-4f6d-a0b2-13ff53d5e124","Type":"ContainerDied","Data":"227074d547e57dc8859918fbb888dc891d073356f7896f051bd49804025d626c"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.170878 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.174572 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-8278h"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.181441 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8f9815d1-2297-4a66-9793-ba485053ca2a","Type":"ContainerDied","Data":"1d4ac275fe718fb06abe26d3a57ceedc8ad5abd3b62553ba6c0ce64fc14b2756"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.181543 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.183266 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-39ec-account-create-sh72f"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.190551 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone39ec-account-delete-6j6qn"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.191832 4856 generic.go:334] "Generic (PLEG): container finished" podID="fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8" containerID="acef6c657dbb259f9e9177fab5afef052fec2f5a02a3c72c2be4304e9d337a1c" exitCode=0 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.191904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron73f8-account-delete-b8zpb" event={"ID":"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8","Type":"ContainerDied","Data":"acef6c657dbb259f9e9177fab5afef052fec2f5a02a3c72c2be4304e9d337a1c"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.195633 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-39ec-account-create-sh72f"] Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.229893 4856 generic.go:334] "Generic (PLEG): container finished" podID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerID="4b3192676d3e19f237ce934c70e2e2105edb9e9415b2d7c5b848a4de24f6ac9a" exitCode=0 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.230015 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79bdcb776d-cl77m" event={"ID":"39f7a457-9a5c-48b5-86c0-24d274596c8a","Type":"ContainerDied","Data":"4b3192676d3e19f237ce934c70e2e2105edb9e9415b2d7c5b848a4de24f6ac9a"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.255681 4856 generic.go:334] "Generic (PLEG): container finished" podID="5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f" containerID="9741711f509fcaac60e11d8c80612fcee97889c09aee1bfdf5b301c894e0da33" exitCode=0 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.255804 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderceda-account-delete-chlrj" event={"ID":"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f","Type":"ContainerDied","Data":"9741711f509fcaac60e11d8c80612fcee97889c09aee1bfdf5b301c894e0da33"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.279009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88779\" (UniqueName: \"kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.279115 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.279251 4856 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.279296 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts podName:5f0e688e-e928-4da2-b752-fb04a6307071 nodeName:}" failed. 
No retries permitted until 2025-11-22 07:32:24.779282038 +0000 UTC m=+1787.192675296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts") pod "keystone39ec-account-delete-6j6qn" (UID: "5f0e688e-e928-4da2-b752-fb04a6307071") : configmap "openstack-scripts" not found Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.282790 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-56bc6597ff-ll6fl" event={"ID":"314d3b00-9bb4-4caa-a2dd-521e70e3d73d","Type":"ContainerDied","Data":"bf47a214c3fcadf89b3d1f49750ee64111d41e0ee997442284899dbb05d85345"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.282937 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-56bc6597ff-ll6fl" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.286464 4856 projected.go:194] Error preparing data for projected volume kube-api-access-88779 for pod openstack/keystone39ec-account-delete-6j6qn: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.286660 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779 podName:5f0e688e-e928-4da2-b752-fb04a6307071 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:24.786637888 +0000 UTC m=+1787.200031146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-88779" (UniqueName: "kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779") pod "keystone39ec-account-delete-6j6qn" (UID: "5f0e688e-e928-4da2-b752-fb04a6307071") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.297099 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "8f9815d1-2297-4a66-9793-ba485053ca2a" (UID: "8f9815d1-2297-4a66-9793-ba485053ca2a"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.303914 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f9815d1-2297-4a66-9793-ba485053ca2a" (UID: "8f9815d1-2297-4a66-9793-ba485053ca2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.305745 4856 generic.go:334] "Generic (PLEG): container finished" podID="bb0a212d-74dc-40d3-84a4-bce83b78e788" containerID="252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928" exitCode=0 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.305925 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb0a212d-74dc-40d3-84a4-bce83b78e788","Type":"ContainerDied","Data":"252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.306043 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bb0a212d-74dc-40d3-84a4-bce83b78e788","Type":"ContainerDied","Data":"145e73a89b1cf46146622956547b80e15f5d5360146cadd995a3e353c67367ed"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.306253 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.317343 4856 generic.go:334] "Generic (PLEG): container finished" podID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerID="cfc3e2910129f9e8a60e68b621e6eee3267b6c9aa86e078920823532cee13fa0" exitCode=0 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.317424 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c75cebe3-86db-4be1-9755-4bd8a83c9796","Type":"ContainerDied","Data":"cfc3e2910129f9e8a60e68b621e6eee3267b6c9aa86e078920823532cee13fa0"} Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.367047 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.390145 4856 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.390384 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.390808 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.419223 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.492428 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.492874 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.497948 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "8f9815d1-2297-4a66-9793-ba485053ca2a" (UID: "8f9815d1-2297-4a66-9793-ba485053ca2a"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.504649 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18fcab55-6a49-4c21-9314-435129cf376a" (UID: "18fcab55-6a49-4c21-9314-435129cf376a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.515914 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "70c845eb-6695-4de7-8b4a-ef7c6a6701a4" (UID: "70c845eb-6695-4de7-8b4a-ef7c6a6701a4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.542654 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.594472 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.594526 4856 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f9815d1-2297-4a66-9793-ba485053ca2a-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.594567 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.594582 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18fcab55-6a49-4c21-9314-435129cf376a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.599014 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.624370 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.628154 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-config-data" (OuterVolumeSpecName: "config-data") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.667811 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aec2d14e-7026-4f6d-a0b2-13ff53d5e124" (UID: "aec2d14e-7026-4f6d-a0b2-13ff53d5e124"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.698069 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.698112 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.698207 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.698230 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.731822 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="galera" containerID="cri-o://bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" gracePeriod=30 Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.741390 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0768fe63-c6c8-48c2-a121-7216823f73ef" path="/var/lib/kubelet/pods/0768fe63-c6c8-48c2-a121-7216823f73ef/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.743046 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102e4706-2696-459a-88e6-b6cd95733094" path="/var/lib/kubelet/pods/102e4706-2696-459a-88e6-b6cd95733094/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.743927 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23bda6aa-0edd-4530-99a3-860bf6dff736" path="/var/lib/kubelet/pods/23bda6aa-0edd-4530-99a3-860bf6dff736/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.745286 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e9bfff-515e-462a-9a73-a9514676f9f8" path="/var/lib/kubelet/pods/42e9bfff-515e-462a-9a73-a9514676f9f8/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.745834 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62be1cd5-ba89-49d3-8f57-6ab0bf20848a" path="/var/lib/kubelet/pods/62be1cd5-ba89-49d3-8f57-6ab0bf20848a/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.747020 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5de4ba-e26d-45de-a653-8cc9be68d5c3" path="/var/lib/kubelet/pods/8b5de4ba-e26d-45de-a653-8cc9be68d5c3/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.748025 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c76350d-ce88-42c5-8f7c-68c084a511e2" path="/var/lib/kubelet/pods/8c76350d-ce88-42c5-8f7c-68c084a511e2/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.748878 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a94a048-f961-4675-85bf-88414e414a51" path="/var/lib/kubelet/pods/9a94a048-f961-4675-85bf-88414e414a51/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.750568 4856 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="e04b9723-304a-46a5-a230-2daf9bcd6c3c" path="/var/lib/kubelet/pods/e04b9723-304a-46a5-a230-2daf9bcd6c3c/volumes" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.777363 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:34294->10.217.0.208:8775: read: connection reset by peer" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.777407 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": read tcp 10.217.0.2:34280->10.217.0.208:8775: read: connection reset by peer" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.800041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88779\" (UniqueName: \"kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.800912 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.801063 4856 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.801112 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts podName:5f0e688e-e928-4da2-b752-fb04a6307071 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:25.801093478 +0000 UTC m=+1788.214486736 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts") pod "keystone39ec-account-delete-6j6qn" (UID: "5f0e688e-e928-4da2-b752-fb04a6307071") : configmap "openstack-scripts" not found Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.804055 4856 projected.go:194] Error preparing data for projected volume kube-api-access-88779 for pod openstack/keystone39ec-account-delete-6j6qn: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:32:24 crc kubenswrapper[4856]: E1122 07:32:24.804207 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779 podName:5f0e688e-e928-4da2-b752-fb04a6307071 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:25.804154491 +0000 UTC m=+1788.217547829 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-88779" (UniqueName: "kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779") pod "keystone39ec-account-delete-6j6qn" (UID: "5f0e688e-e928-4da2-b752-fb04a6307071") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.813467 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.818184 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.820210 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.847774 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "314d3b00-9bb4-4caa-a2dd-521e70e3d73d" (UID: "314d3b00-9bb4-4caa-a2dd-521e70e3d73d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.879476 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.903521 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.903562 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.903574 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.903585 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.903642 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/314d3b00-9bb4-4caa-a2dd-521e70e3d73d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.925968 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.941623 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70c845eb-6695-4de7-8b4a-ef7c6a6701a4" (UID: "70c845eb-6695-4de7-8b4a-ef7c6a6701a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.953660 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.954380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "55308d58-6be6-483d-bc27-2904f15d32f0" (UID: "55308d58-6be6-483d-bc27-2904f15d32f0"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.964732 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "70c845eb-6695-4de7-8b4a-ef7c6a6701a4" (UID: "70c845eb-6695-4de7-8b4a-ef7c6a6701a4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:24 crc kubenswrapper[4856]: I1122 07:32:24.970379 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data" (OuterVolumeSpecName: "config-data") pod "b049e107-76c1-4669-adb3-7b92560ef90d" (UID: "b049e107-76c1-4669-adb3-7b92560ef90d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.015742 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.015776 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.015791 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.015810 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/55308d58-6be6-483d-bc27-2904f15d32f0-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.015823 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b049e107-76c1-4669-adb3-7b92560ef90d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.015836 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/70c845eb-6695-4de7-8b4a-ef7c6a6701a4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.042151 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-config-data" (OuterVolumeSpecName: "config-data") pod "bb0a212d-74dc-40d3-84a4-bce83b78e788" (UID: "bb0a212d-74dc-40d3-84a4-bce83b78e788"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.065760 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data" (OuterVolumeSpecName: "config-data") pod "aec2d14e-7026-4f6d-a0b2-13ff53d5e124" (UID: "aec2d14e-7026-4f6d-a0b2-13ff53d5e124"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.120676 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0a212d-74dc-40d3-84a4-bce83b78e788-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.120714 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aec2d14e-7026-4f6d-a0b2-13ff53d5e124-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.144677 4856 scope.go:117] "RemoveContainer" containerID="4271d7224db735b34906645781ea2372db51f2e3d614022512e9b52eee61ba39" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.220824 4856 scope.go:117] "RemoveContainer" containerID="ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.271257 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57 is running failed: container process not found" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.271844 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57 is running failed: container process not found" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.272186 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57 is running failed: container process not found" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.272255 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" containerName="nova-scheduler-scheduler" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.305855 4856 scope.go:117] "RemoveContainer" containerID="0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.340017 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c75cebe3-86db-4be1-9755-4bd8a83c9796","Type":"ContainerDied","Data":"e33450b9f082c55f7b154961241179c872319fb3ef16075de2e014c64cc91197"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.340236 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e33450b9f082c55f7b154961241179c872319fb3ef16075de2e014c64cc91197" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.341883 4856 generic.go:334] 
"Generic (PLEG): container finished" podID="2b88f55c-12d5-4cba-a155-aa00c19c94f4" containerID="889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.341962 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2b88f55c-12d5-4cba-a155-aa00c19c94f4","Type":"ContainerDied","Data":"889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.342090 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2b88f55c-12d5-4cba-a155-aa00c19c94f4","Type":"ContainerDied","Data":"30e27716525ee234195b0b17bda99ec7ef8a3f241fa24c80ac5dc2e3afb9fe20"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.342179 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e27716525ee234195b0b17bda99ec7ef8a3f241fa24c80ac5dc2e3afb9fe20" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.344444 4856 generic.go:334] "Generic (PLEG): container finished" podID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerID="cde1d5e34fed489806a536b0abe875c6d7151093d591a234d52ed41c693e2b63" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.344560 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d3a5d31-7183-4298-87ea-4aa84aa395b4","Type":"ContainerDied","Data":"cde1d5e34fed489806a536b0abe875c6d7151093d591a234d52ed41c693e2b63"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.344629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"1d3a5d31-7183-4298-87ea-4aa84aa395b4","Type":"ContainerDied","Data":"55b8b6cc11950f796e4ca15a70ab3b5a09ce69182c5025eae1e348eb376cbc08"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.344713 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55b8b6cc11950f796e4ca15a70ab3b5a09ce69182c5025eae1e348eb376cbc08" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.347475 4856 generic.go:334] "Generic (PLEG): container finished" podID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.347600 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07329cf7-c3ff-410a-8ab7-8f19ae9d3974","Type":"ContainerDied","Data":"9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.347668 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"07329cf7-c3ff-410a-8ab7-8f19ae9d3974","Type":"ContainerDied","Data":"1803224d4bc27f76558788710d44cf87b3090071d8a0bc5d61101c30d3510424"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.347851 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1803224d4bc27f76558788710d44cf87b3090071d8a0bc5d61101c30d3510424" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.351911 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79bdcb776d-cl77m" event={"ID":"39f7a457-9a5c-48b5-86c0-24d274596c8a","Type":"ContainerDied","Data":"d6fe7529ff8e811824319a6266cace19801991ea250affe344e2f1ecfd121999"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.352056 4856 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6fe7529ff8e811824319a6266cace19801991ea250affe344e2f1ecfd121999" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.353436 4856 generic.go:334] "Generic (PLEG): container finished" podID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerID="34b8ef8ac4487f65f5dff6c904e4aa6b5fc3a3fd278121552b6ef063060959ec" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.353638 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92","Type":"ContainerDied","Data":"34b8ef8ac4487f65f5dff6c904e4aa6b5fc3a3fd278121552b6ef063060959ec"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.356265 4856 generic.go:334] "Generic (PLEG): container finished" podID="63e9edb8-ed05-4d0f-aff1-d59b369cd76d" containerID="63572ca1ab3b819180a4d2cdb47a2c1f194a6daee761f767b694471277028ac6" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.356400 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"63e9edb8-ed05-4d0f-aff1-d59b369cd76d","Type":"ContainerDied","Data":"63572ca1ab3b819180a4d2cdb47a2c1f194a6daee761f767b694471277028ac6"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.369120 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"4a38fdd7-2dc0-4ebc-91c7-359d0e437900","Type":"ContainerDied","Data":"01875bf103004a78036f815cb505ee109cef1d2451273e63e585335a0418eaf0"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.369199 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01875bf103004a78036f815cb505ee109cef1d2451273e63e585335a0418eaf0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.372735 4856 generic.go:334] "Generic (PLEG): container finished" podID="b1ccf431-f692-459f-b249-66bd9747d09c" containerID="748dcd5bb334b4bc2361b63a4afbafd4286f9d6147d5c3a3a460a57c1f55b549" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.372879 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1ccf431-f692-459f-b249-66bd9747d09c","Type":"ContainerDied","Data":"748dcd5bb334b4bc2361b63a4afbafd4286f9d6147d5c3a3a460a57c1f55b549"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.379595 4856 scope.go:117] "RemoveContainer" containerID="ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.379913 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42\": container with ID starting with ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42 not found: ID does not exist" containerID="ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.379958 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42"} err="failed to get container status \"ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42\": rpc error: code = NotFound desc = could not find container \"ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42\": container with ID starting with ae19c364f3ae18bad5e6112fe44d680c855202a8622fbd6ab5f9cf49060fcd42 not found: ID does 
not exist" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.379990 4856 scope.go:117] "RemoveContainer" containerID="0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.380751 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.380989 4856 generic.go:334] "Generic (PLEG): container finished" podID="e0f8403e-a06a-4804-b60a-98974506f547" containerID="02a270d659156bdef916a33cbab50d2c8c0cc0527187e2d9fcd2dc12495e6671" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.381111 4856 generic.go:334] "Generic (PLEG): container finished" podID="e0f8403e-a06a-4804-b60a-98974506f547" containerID="24259cf1c1f38f1bc7f64997b64b9ed69fb4bf62d123b79b4fadefd0f143056d" exitCode=0 Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.381037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerDied","Data":"02a270d659156bdef916a33cbab50d2c8c0cc0527187e2d9fcd2dc12495e6671"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.382699 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerDied","Data":"24259cf1c1f38f1bc7f64997b64b9ed69fb4bf62d123b79b4fadefd0f143056d"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.382818 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e0f8403e-a06a-4804-b60a-98974506f547","Type":"ContainerDied","Data":"6be3023150b988cc76c05c3bd45b087a3c33a10aafda575fc6de920562f9d152"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.382953 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6be3023150b988cc76c05c3bd45b087a3c33a10aafda575fc6de920562f9d152" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.380995 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01\": container with ID starting with 0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01 not found: ID does not exist" containerID="0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.383138 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01"} err="failed to get container status \"0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01\": rpc error: code = NotFound desc = could not find container \"0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01\": container with ID starting with 0439eeb4750e14d8f9ae56ff7b037034d989e88f9f897866cf9e1ff613634e01 not found: ID does not exist" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.383358 4856 scope.go:117] "RemoveContainer" containerID="ae2a400802cf450e80a83dad86eb4c2623ee43d44b73239e9fc8e7d9b2dbe411" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.408611 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.408657 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance1a70-account-delete-m5qqx" event={"ID":"df337886-1469-499f-bbb4-564f479cafa7","Type":"ContainerDied","Data":"8286b09b3af8de22cd7b4baf2b144f226e9a825dc711ba4c0cb6b9829ff161b4"} Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.408964 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8286b09b3af8de22cd7b4baf2b144f226e9a825dc711ba4c0cb6b9829ff161b4" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.444178 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.456351 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-88779 operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone39ec-account-delete-6j6qn" podUID="5f0e688e-e928-4da2-b752-fb04a6307071" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.457842 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-fc96b95bb-4mtxg"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.470525 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.478551 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.482545 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.486902 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-fc96b95bb-4mtxg"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.506041 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.506287 4856 scope.go:117] "RemoveContainer" containerID="b3b1f2a0ac6e8ef5ca8623acaf447ee1e4d4c639c63af0026dc10d1cc70ff28a" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.510379 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.529370 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.530011 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-internal-tls-certs\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.530190 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-certs\") pod \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536281 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data\") pod \"39f7a457-9a5c-48b5-86c0-24d274596c8a\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536337 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-scripts\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536372 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f7a457-9a5c-48b5-86c0-24d274596c8a-logs\") pod \"39f7a457-9a5c-48b5-86c0-24d274596c8a\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536398 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-internal-tls-certs\") pod \"39f7a457-9a5c-48b5-86c0-24d274596c8a\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536447 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-combined-ca-bundle\") pod \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536491 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-public-tls-certs\") pod \"39f7a457-9a5c-48b5-86c0-24d274596c8a\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536600 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcxnm\" (UniqueName: 
\"kubernetes.io/projected/39f7a457-9a5c-48b5-86c0-24d274596c8a-kube-api-access-zcxnm\") pod \"39f7a457-9a5c-48b5-86c0-24d274596c8a\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536638 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-logs\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536661 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data-custom\") pod \"39f7a457-9a5c-48b5-86c0-24d274596c8a\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536682 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536709 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-config\") pod \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536747 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wb7g\" (UniqueName: \"kubernetes.io/projected/c75cebe3-86db-4be1-9755-4bd8a83c9796-kube-api-access-4wb7g\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536778 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-combined-ca-bundle\") pod \"39f7a457-9a5c-48b5-86c0-24d274596c8a\" (UID: \"39f7a457-9a5c-48b5-86c0-24d274596c8a\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536820 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-combined-ca-bundle\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536891 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-config-data\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.536915 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-httpd-run\") pod \"c75cebe3-86db-4be1-9755-4bd8a83c9796\" (UID: \"c75cebe3-86db-4be1-9755-4bd8a83c9796\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.538293 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h85ws\" (UniqueName: 
\"kubernetes.io/projected/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-api-access-h85ws\") pod \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\" (UID: \"4a38fdd7-2dc0-4ebc-91c7-359d0e437900\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.539976 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-logs" (OuterVolumeSpecName: "logs") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.535639 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.543580 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.547294 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.552946 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.559991 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.561436 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39f7a457-9a5c-48b5-86c0-24d274596c8a-logs" (OuterVolumeSpecName: "logs") pod "39f7a457-9a5c-48b5-86c0-24d274596c8a" (UID: "39f7a457-9a5c-48b5-86c0-24d274596c8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.562947 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-scripts" (OuterVolumeSpecName: "scripts") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.563569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.580130 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.580309 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-api-access-h85ws" (OuterVolumeSpecName: "kube-api-access-h85ws") pod "4a38fdd7-2dc0-4ebc-91c7-359d0e437900" (UID: "4a38fdd7-2dc0-4ebc-91c7-359d0e437900"). InnerVolumeSpecName "kube-api-access-h85ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.586650 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c75cebe3-86db-4be1-9755-4bd8a83c9796-kube-api-access-4wb7g" (OuterVolumeSpecName: "kube-api-access-4wb7g") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). InnerVolumeSpecName "kube-api-access-4wb7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.591688 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39f7a457-9a5c-48b5-86c0-24d274596c8a" (UID: "39f7a457-9a5c-48b5-86c0-24d274596c8a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.591682 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.593749 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.593795 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="galera" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.599864 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.606738 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f7a457-9a5c-48b5-86c0-24d274596c8a-kube-api-access-zcxnm" (OuterVolumeSpecName: "kube-api-access-zcxnm") pod "39f7a457-9a5c-48b5-86c0-24d274596c8a" (UID: "39f7a457-9a5c-48b5-86c0-24d274596c8a"). 
InnerVolumeSpecName "kube-api-access-zcxnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.611995 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.621267 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.622642 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.627820 4856 scope.go:117] "RemoveContainer" containerID="beff5c4f9865829069fb5a650f73d4daaf877eaaaf7cd411dbc96c82233e8e19" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.628024 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.630830 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.638935 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644238 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-combined-ca-bundle\") pod \"b1ccf431-f692-459f-b249-66bd9747d09c\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644285 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-internal-tls-certs\") pod \"b1ccf431-f692-459f-b249-66bd9747d09c\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644339 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6xnw\" (UniqueName: \"kubernetes.io/projected/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-kube-api-access-l6xnw\") pod \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644368 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-config-data\") pod \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644396 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kolla-config\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644417 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-generated\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644457 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-public-tls-certs\") pod \"b1ccf431-f692-459f-b249-66bd9747d09c\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644476 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-sg-core-conf-yaml\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644496 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-ceilometer-tls-certs\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644547 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df337886-1469-499f-bbb4-564f479cafa7-operator-scripts\") pod \"df337886-1469-499f-bbb4-564f479cafa7\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644569 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-scripts\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644590 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-default\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644613 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-config-data\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644648 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-combined-ca-bundle\") pod \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644691 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7nj\" (UniqueName: \"kubernetes.io/projected/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kube-api-access-kx7nj\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644720 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-log-httpd\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644738 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644761 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-operator-scripts\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644806 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgdmw\" (UniqueName: \"kubernetes.io/projected/2b88f55c-12d5-4cba-a155-aa00c19c94f4-kube-api-access-mgdmw\") pod \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\" (UID: \"2b88f55c-12d5-4cba-a155-aa00c19c94f4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644828 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-combined-ca-bundle\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644855 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-run-httpd\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644882 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data\") pod \"b1ccf431-f692-459f-b249-66bd9747d09c\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644901 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-combined-ca-bundle\") pod \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644926 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k87gc\" (UniqueName: \"kubernetes.io/projected/b1ccf431-f692-459f-b249-66bd9747d09c-kube-api-access-k87gc\") pod \"b1ccf431-f692-459f-b249-66bd9747d09c\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644960 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ccf431-f692-459f-b249-66bd9747d09c-logs\") pod \"b1ccf431-f692-459f-b249-66bd9747d09c\" (UID: \"b1ccf431-f692-459f-b249-66bd9747d09c\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.644987 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-galera-tls-certs\") pod \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\" (UID: \"1d3a5d31-7183-4298-87ea-4aa84aa395b4\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645012 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft2fw\" (UniqueName: 
\"kubernetes.io/projected/e0f8403e-a06a-4804-b60a-98974506f547-kube-api-access-ft2fw\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645052 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-combined-ca-bundle\") pod \"e0f8403e-a06a-4804-b60a-98974506f547\" (UID: \"e0f8403e-a06a-4804-b60a-98974506f547\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645082 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-config-data\") pod \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\" (UID: \"07329cf7-c3ff-410a-8ab7-8f19ae9d3974\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645064 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645121 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhkvs\" (UniqueName: \"kubernetes.io/projected/df337886-1469-499f-bbb4-564f479cafa7-kube-api-access-xhkvs\") pod \"df337886-1469-499f-bbb4-564f479cafa7\" (UID: \"df337886-1469-499f-bbb4-564f479cafa7\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645497 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645544 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645556 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wb7g\" (UniqueName: \"kubernetes.io/projected/c75cebe3-86db-4be1-9755-4bd8a83c9796-kube-api-access-4wb7g\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645568 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645577 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c75cebe3-86db-4be1-9755-4bd8a83c9796-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645585 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h85ws\" (UniqueName: \"kubernetes.io/projected/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-api-access-h85ws\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645593 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645600 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39f7a457-9a5c-48b5-86c0-24d274596c8a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.645609 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcxnm\" (UniqueName: \"kubernetes.io/projected/39f7a457-9a5c-48b5-86c0-24d274596c8a-kube-api-access-zcxnm\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.647474 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.648198 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.652144 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ccf431-f692-459f-b249-66bd9747d09c-logs" (OuterVolumeSpecName: "logs") pod "b1ccf431-f692-459f-b249-66bd9747d09c" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.655128 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.655490 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.656153 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.656291 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df337886-1469-499f-bbb4-564f479cafa7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "df337886-1469-499f-bbb4-564f479cafa7" (UID: "df337886-1469-499f-bbb4-564f479cafa7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.656743 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.665942 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-kube-api-access-l6xnw" (OuterVolumeSpecName: "kube-api-access-l6xnw") pod "07329cf7-c3ff-410a-8ab7-8f19ae9d3974" (UID: "07329cf7-c3ff-410a-8ab7-8f19ae9d3974"). InnerVolumeSpecName "kube-api-access-l6xnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.666020 4856 scope.go:117] "RemoveContainer" containerID="52b5d54be089b50ed66bdc784286dd4da166b47053361ce527fcf601f29f2b4c" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.668759 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0f8403e-a06a-4804-b60a-98974506f547-kube-api-access-ft2fw" (OuterVolumeSpecName: "kube-api-access-ft2fw") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "kube-api-access-ft2fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.669036 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a38fdd7-2dc0-4ebc-91c7-359d0e437900" (UID: "4a38fdd7-2dc0-4ebc-91c7-359d0e437900"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.672148 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kube-api-access-kx7nj" (OuterVolumeSpecName: "kube-api-access-kx7nj") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "kube-api-access-kx7nj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.697951 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ccf431-f692-459f-b249-66bd9747d09c-kube-api-access-k87gc" (OuterVolumeSpecName: "kube-api-access-k87gc") pod "b1ccf431-f692-459f-b249-66bd9747d09c" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c"). InnerVolumeSpecName "kube-api-access-k87gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.698062 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b88f55c-12d5-4cba-a155-aa00c19c94f4-kube-api-access-mgdmw" (OuterVolumeSpecName: "kube-api-access-mgdmw") pod "2b88f55c-12d5-4cba-a155-aa00c19c94f4" (UID: "2b88f55c-12d5-4cba-a155-aa00c19c94f4"). InnerVolumeSpecName "kube-api-access-mgdmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.701663 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df337886-1469-499f-bbb4-564f479cafa7-kube-api-access-xhkvs" (OuterVolumeSpecName: "kube-api-access-xhkvs") pod "df337886-1469-499f-bbb4-564f479cafa7" (UID: "df337886-1469-499f-bbb4-564f479cafa7"). InnerVolumeSpecName "kube-api-access-xhkvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.715044 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.725342 4856 scope.go:117] "RemoveContainer" containerID="72290d753c232f9f411f4eca62ef3cf6c13d4eb7af108e1e14ff35b4c3746200" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.729665 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-scripts" (OuterVolumeSpecName: "scripts") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.730868 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-56bc6597ff-ll6fl"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.743659 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-56bc6597ff-ll6fl"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747143 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfpsw\" (UniqueName: \"kubernetes.io/projected/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-kube-api-access-kfpsw\") pod \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747341 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-combined-ca-bundle\") pod \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747378 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-memcached-tls-certs\") pod \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747405 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-config-data\") pod \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747477 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-combined-ca-bundle\") pod \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747563 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-nova-metadata-tls-certs\") pod \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747651 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-logs\") pod \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.747747 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data\") pod \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\" (UID: \"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.748489 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kolla-config\") pod \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.748557 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4gch\" (UniqueName: \"kubernetes.io/projected/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kube-api-access-q4gch\") pod \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\" (UID: \"63e9edb8-ed05-4d0f-aff1-d59b369cd76d\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.748790 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-logs" (OuterVolumeSpecName: "logs") pod "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.749237 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.749320 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data podName:4ac8c44e-0667-43f7-aebd-a7b4c5bcb429 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:33.749277809 +0000 UTC m=+1796.162671097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data") pod "rabbitmq-cell1-server-0" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429") : configmap "rabbitmq-cell1-config-data" not found Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.749340 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-config-data" (OuterVolumeSpecName: "config-data") pod "63e9edb8-ed05-4d0f-aff1-d59b369cd76d" (UID: "63e9edb8-ed05-4d0f-aff1-d59b369cd76d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.749899 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.749918 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.749934 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgdmw\" (UniqueName: \"kubernetes.io/projected/2b88f55c-12d5-4cba-a155-aa00c19c94f4-kube-api-access-mgdmw\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.749946 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.749958 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k87gc\" (UniqueName: \"kubernetes.io/projected/b1ccf431-f692-459f-b249-66bd9747d09c-kube-api-access-k87gc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.749971 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1ccf431-f692-459f-b249-66bd9747d09c-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.750455 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "63e9edb8-ed05-4d0f-aff1-d59b369cd76d" (UID: "63e9edb8-ed05-4d0f-aff1-d59b369cd76d"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751323 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft2fw\" (UniqueName: \"kubernetes.io/projected/e0f8403e-a06a-4804-b60a-98974506f547-kube-api-access-ft2fw\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751362 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751380 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhkvs\" (UniqueName: \"kubernetes.io/projected/df337886-1469-499f-bbb4-564f479cafa7-kube-api-access-xhkvs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751394 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6xnw\" (UniqueName: \"kubernetes.io/projected/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-kube-api-access-l6xnw\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751408 4856 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751420 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751432 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/df337886-1469-499f-bbb4-564f479cafa7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751444 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751457 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/1d3a5d31-7183-4298-87ea-4aa84aa395b4-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751470 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7nj\" (UniqueName: \"kubernetes.io/projected/1d3a5d31-7183-4298-87ea-4aa84aa395b4-kube-api-access-kx7nj\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.751482 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e0f8403e-a06a-4804-b60a-98974506f547-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.757636 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "4a38fdd7-2dc0-4ebc-91c7-359d0e437900" (UID: "4a38fdd7-2dc0-4ebc-91c7-359d0e437900"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.770537 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39f7a457-9a5c-48b5-86c0-24d274596c8a" (UID: "39f7a457-9a5c-48b5-86c0-24d274596c8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.770776 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "mysql-db") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.773937 4856 scope.go:117] "RemoveContainer" containerID="e5b7a326f0ad6ee2471d7167a3c293c93e8329469da146c3d10a4dab31910b17" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.780408 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-kube-api-access-kfpsw" (OuterVolumeSpecName: "kube-api-access-kfpsw") pod "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92"). InnerVolumeSpecName "kube-api-access-kfpsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.798312 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kube-api-access-q4gch" (OuterVolumeSpecName: "kube-api-access-q4gch") pod "63e9edb8-ed05-4d0f-aff1-d59b369cd76d" (UID: "63e9edb8-ed05-4d0f-aff1-d59b369cd76d"). InnerVolumeSpecName "kube-api-access-q4gch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.804177 4856 scope.go:117] "RemoveContainer" containerID="75e814a4cfa4f97ecc9bfab324de4d5b2b33d836ae12cc47b87c6782b91c5dae" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.805321 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.814907 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.822975 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "4a38fdd7-2dc0-4ebc-91c7-359d0e437900" (UID: "4a38fdd7-2dc0-4ebc-91c7-359d0e437900"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.826780 4856 scope.go:117] "RemoveContainer" containerID="545281c9124acb52b1ddf1192147efb7e07a95b9f53d9d183531f5e1698bb14f" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.834882 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.852396 4856 scope.go:117] "RemoveContainer" containerID="6219833dba75dd8b4b4fd8f9b3965d45ed8beebecf788175cdad2c1025ca7eea" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.854743 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.854885 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88779\" (UniqueName: \"kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.855045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts\") pod \"keystone39ec-account-delete-6j6qn\" (UID: \"5f0e688e-e928-4da2-b752-fb04a6307071\") " pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.859188 4856 projected.go:194] Error preparing data for projected volume kube-api-access-88779 for pod openstack/keystone39ec-account-delete-6j6qn: failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.860389 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.863459 4856 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.863528 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts podName:5f0e688e-e928-4da2-b752-fb04a6307071 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:27.863494969 +0000 UTC m=+1790.276888227 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts") pod "keystone39ec-account-delete-6j6qn" (UID: "5f0e688e-e928-4da2-b752-fb04a6307071") : configmap "openstack-scripts" not found Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.863559 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779 podName:5f0e688e-e928-4da2-b752-fb04a6307071 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:27.863553881 +0000 UTC m=+1790.276947139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-88779" (UniqueName: "kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779") pod "keystone39ec-account-delete-6j6qn" (UID: "5f0e688e-e928-4da2-b752-fb04a6307071") : failed to fetch token: serviceaccounts "galera-openstack" not found Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864185 4856 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864216 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4gch\" (UniqueName: \"kubernetes.io/projected/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-kube-api-access-q4gch\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864228 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfpsw\" (UniqueName: \"kubernetes.io/projected/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-kube-api-access-kfpsw\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864238 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864248 4856 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864259 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864290 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864317 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.864327 4856 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/4a38fdd7-2dc0-4ebc-91c7-359d0e437900-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.892906 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "39f7a457-9a5c-48b5-86c0-24d274596c8a" (UID: "39f7a457-9a5c-48b5-86c0-24d274596c8a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.900406 4856 scope.go:117] "RemoveContainer" containerID="252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.913586 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-config-data" (OuterVolumeSpecName: "config-data") pod "07329cf7-c3ff-410a-8ab7-8f19ae9d3974" (UID: "07329cf7-c3ff-410a-8ab7-8f19ae9d3974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.913598 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.914675 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.924692 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-config-data" (OuterVolumeSpecName: "config-data") pod "2b88f55c-12d5-4cba-a155-aa00c19c94f4" (UID: "2b88f55c-12d5-4cba-a155-aa00c19c94f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.931538 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-config-data" (OuterVolumeSpecName: "config-data") pod "c75cebe3-86db-4be1-9755-4bd8a83c9796" (UID: "c75cebe3-86db-4be1-9755-4bd8a83c9796"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.940183 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.942404 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1ccf431-f692-459f-b249-66bd9747d09c" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.946657 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.947033 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b88f55c-12d5-4cba-a155-aa00c19c94f4" (UID: "2b88f55c-12d5-4cba-a155-aa00c19c94f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.948613 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.949934 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 07:32:25 crc kubenswrapper[4856]: E1122 07:32:25.949969 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="ovn-northd" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.950845 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b1ccf431-f692-459f-b249-66bd9747d09c" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.957693 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "1d3a5d31-7183-4298-87ea-4aa84aa395b4" (UID: "1d3a5d31-7183-4298-87ea-4aa84aa395b4"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.957814 4856 scope.go:117] "RemoveContainer" containerID="f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.966686 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkvz4\" (UniqueName: \"kubernetes.io/projected/519a764c-9ac2-4f94-84c6-7c284ab676cd-kube-api-access-vkvz4\") pod \"519a764c-9ac2-4f94-84c6-7c284ab676cd\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.966754 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/519a764c-9ac2-4f94-84c6-7c284ab676cd-operator-scripts\") pod \"519a764c-9ac2-4f94-84c6-7c284ab676cd\" (UID: \"519a764c-9ac2-4f94-84c6-7c284ab676cd\") " Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967264 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967289 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967302 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967312 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967323 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967334 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b88f55c-12d5-4cba-a155-aa00c19c94f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967344 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967355 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c75cebe3-86db-4be1-9755-4bd8a83c9796-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967365 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967378 4856 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1d3a5d31-7183-4298-87ea-4aa84aa395b4-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967389 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.967916 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/519a764c-9ac2-4f94-84c6-7c284ab676cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "519a764c-9ac2-4f94-84c6-7c284ab676cd" (UID: "519a764c-9ac2-4f94-84c6-7c284ab676cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.982951 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "39f7a457-9a5c-48b5-86c0-24d274596c8a" (UID: "39f7a457-9a5c-48b5-86c0-24d274596c8a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.982951 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data" (OuterVolumeSpecName: "config-data") pod "b1ccf431-f692-459f-b249-66bd9747d09c" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:25 crc kubenswrapper[4856]: I1122 07:32:25.992922 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/519a764c-9ac2-4f94-84c6-7c284ab676cd-kube-api-access-vkvz4" (OuterVolumeSpecName: "kube-api-access-vkvz4") pod "519a764c-9ac2-4f94-84c6-7c284ab676cd" (UID: "519a764c-9ac2-4f94-84c6-7c284ab676cd"). InnerVolumeSpecName "kube-api-access-vkvz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.040502 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data" (OuterVolumeSpecName: "config-data") pod "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.042588 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.072921 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.082575 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.083088 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.083104 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.083116 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkvz4\" (UniqueName: \"kubernetes.io/projected/519a764c-9ac2-4f94-84c6-7c284ab676cd-kube-api-access-vkvz4\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.083128 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.083166 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/519a764c-9ac2-4f94-84c6-7c284ab676cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.083176 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: E1122 07:32:26.083273 4856 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Nov 22 07:32:26 crc kubenswrapper[4856]: E1122 07:32:26.083358 4856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data podName:0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89 nodeName:}" failed. No retries permitted until 2025-11-22 07:32:34.083339307 +0000 UTC m=+1796.496732575 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data") pod "rabbitmq-server-0" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89") : configmap "rabbitmq-config-data" not found Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.093446 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07329cf7-c3ff-410a-8ab7-8f19ae9d3974" (UID: "07329cf7-c3ff-410a-8ab7-8f19ae9d3974"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.121281 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data" (OuterVolumeSpecName: "config-data") pod "39f7a457-9a5c-48b5-86c0-24d274596c8a" (UID: "39f7a457-9a5c-48b5-86c0-24d274596c8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.125121 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "63e9edb8-ed05-4d0f-aff1-d59b369cd76d" (UID: "63e9edb8-ed05-4d0f-aff1-d59b369cd76d"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.129781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63e9edb8-ed05-4d0f-aff1-d59b369cd76d" (UID: "63e9edb8-ed05-4d0f-aff1-d59b369cd76d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.133311 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.134123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" (UID: "e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.141499 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b1ccf431-f692-459f-b249-66bd9747d09c" (UID: "b1ccf431-f692-459f-b249-66bd9747d09c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.171628 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184236 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07329cf7-c3ff-410a-8ab7-8f19ae9d3974-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184268 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39f7a457-9a5c-48b5-86c0-24d274596c8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184278 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184288 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184297 4856 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/63e9edb8-ed05-4d0f-aff1-d59b369cd76d-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184305 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1ccf431-f692-459f-b249-66bd9747d09c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184314 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.184323 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.193247 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-config-data" (OuterVolumeSpecName: "config-data") pod "e0f8403e-a06a-4804-b60a-98974506f547" (UID: "e0f8403e-a06a-4804-b60a-98974506f547"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.265943 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.272107 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.279914 4856 scope.go:117] "RemoveContainer" containerID="252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928" Nov 22 07:32:26 crc kubenswrapper[4856]: E1122 07:32:26.280413 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928\": container with ID starting with 252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928 not found: ID does not exist" containerID="252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.280479 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928"} err="failed to get container status \"252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928\": rpc error: code = NotFound desc = could not find container \"252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928\": container with ID starting with 252762430df01a665bfdad1f0791906f5cce3eb5c2ead3119b24019aca7e6928 not found: ID does not exist" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.280544 4856 scope.go:117] "RemoveContainer" containerID="f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0" Nov 22 07:32:26 crc kubenswrapper[4856]: E1122 07:32:26.282275 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0\": container with ID starting with f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0 not found: ID does not exist" containerID="f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.282325 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0"} err="failed to get container status \"f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0\": rpc error: code = NotFound desc = could not find container \"f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0\": container with ID starting with f37064b2330b484c7d969fce3d2a893f3abae84ebef184dd47e6d9b5cc22dbd0 not found: ID does not exist" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.285122 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-operator-scripts\") pod \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.285205 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4flb\" (UniqueName: \"kubernetes.io/projected/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-kube-api-access-t4flb\") pod \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.285237 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6kdp\" (UniqueName: \"kubernetes.io/projected/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-kube-api-access-g6kdp\") pod 
\"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\" (UID: \"9eaa66e0-ee9b-4115-b385-222e8ac0c21c\") " Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.285260 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-operator-scripts\") pod \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\" (UID: \"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f\") " Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.285871 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0f8403e-a06a-4804-b60a-98974506f547-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.286808 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.287433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f" (UID: "5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.287770 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9eaa66e0-ee9b-4115-b385-222e8ac0c21c" (UID: "9eaa66e0-ee9b-4115-b385-222e8ac0c21c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.290722 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-kube-api-access-g6kdp" (OuterVolumeSpecName: "kube-api-access-g6kdp") pod "9eaa66e0-ee9b-4115-b385-222e8ac0c21c" (UID: "9eaa66e0-ee9b-4115-b385-222e8ac0c21c"). InnerVolumeSpecName "kube-api-access-g6kdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.296839 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-kube-api-access-t4flb" (OuterVolumeSpecName: "kube-api-access-t4flb") pod "5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f" (UID: "5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f"). InnerVolumeSpecName "kube-api-access-t4flb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.325355 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:26 crc kubenswrapper[4856]: I1122 07:32:26.334548 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386284 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxnqt\" (UniqueName: \"kubernetes.io/projected/a66fa8fc-f908-43e7-a169-6156fc2092f8-kube-api-access-pxnqt\") pod \"a66fa8fc-f908-43e7-a169-6156fc2092f8\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386348 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-operator-scripts\") pod \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386386 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-operator-scripts\") pod \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386430 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tddl8\" (UniqueName: \"kubernetes.io/projected/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-kube-api-access-tddl8\") pod \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\" (UID: \"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386467 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zc4r\" (UniqueName: \"kubernetes.io/projected/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-kube-api-access-5zc4r\") pod \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\" (UID: \"cfc2e8cc-04c1-4481-bf7d-d7e99972200f\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386518 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a66fa8fc-f908-43e7-a169-6156fc2092f8-operator-scripts\") pod \"a66fa8fc-f908-43e7-a169-6156fc2092f8\" (UID: \"a66fa8fc-f908-43e7-a169-6156fc2092f8\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386828 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386843 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4flb\" (UniqueName: \"kubernetes.io/projected/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-kube-api-access-t4flb\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386855 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6kdp\" (UniqueName: \"kubernetes.io/projected/9eaa66e0-ee9b-4115-b385-222e8ac0c21c-kube-api-access-g6kdp\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.386866 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.387246 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a66fa8fc-f908-43e7-a169-6156fc2092f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a66fa8fc-f908-43e7-a169-6156fc2092f8" (UID: "a66fa8fc-f908-43e7-a169-6156fc2092f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.387675 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cfc2e8cc-04c1-4481-bf7d-d7e99972200f" (UID: "cfc2e8cc-04c1-4481-bf7d-d7e99972200f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.388042 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8" (UID: "fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.389528 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a66fa8fc-f908-43e7-a169-6156fc2092f8-kube-api-access-pxnqt" (OuterVolumeSpecName: "kube-api-access-pxnqt") pod "a66fa8fc-f908-43e7-a169-6156fc2092f8" (UID: "a66fa8fc-f908-43e7-a169-6156fc2092f8"). InnerVolumeSpecName "kube-api-access-pxnqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.394760 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-kube-api-access-5zc4r" (OuterVolumeSpecName: "kube-api-access-5zc4r") pod "cfc2e8cc-04c1-4481-bf7d-d7e99972200f" (UID: "cfc2e8cc-04c1-4481-bf7d-d7e99972200f"). InnerVolumeSpecName "kube-api-access-5zc4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.394861 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-kube-api-access-tddl8" (OuterVolumeSpecName: "kube-api-access-tddl8") pod "fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8" (UID: "fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8"). InnerVolumeSpecName "kube-api-access-tddl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.437395 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron73f8-account-delete-b8zpb" event={"ID":"fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8","Type":"ContainerDied","Data":"193af241635c99ea47ec3315efa583c295c6bf29160559a1528156c08c60bc9d"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.437445 4856 scope.go:117] "RemoveContainer" containerID="acef6c657dbb259f9e9177fab5afef052fec2f5a02a3c72c2be4304e9d337a1c" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.437568 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron73f8-account-delete-b8zpb" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.445765 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"63e9edb8-ed05-4d0f-aff1-d59b369cd76d","Type":"ContainerDied","Data":"302f5810ebcb14d86323414de2c3e642b10138700566cdfa4601f3ae41122fba"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.445899 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.452662 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinderceda-account-delete-chlrj" event={"ID":"5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f","Type":"ContainerDied","Data":"9abfc1f2b88c3f7fa0191ded52eb115d679f975757c1bf640410340993f23dc1"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.452692 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9abfc1f2b88c3f7fa0191ded52eb115d679f975757c1bf640410340993f23dc1" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.452742 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinderceda-account-delete-chlrj" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.454291 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement00a5-account-delete-qrc4g" event={"ID":"9eaa66e0-ee9b-4115-b385-222e8ac0c21c","Type":"ContainerDied","Data":"9970fb69254ee38cebcff01cf14152d6038ef05428b95cedacc4f1c2e4b74be5"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.454326 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9970fb69254ee38cebcff01cf14152d6038ef05428b95cedacc4f1c2e4b74be5" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.454391 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement00a5-account-delete-qrc4g" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.461208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican29ba-account-delete-f7rqf" event={"ID":"519a764c-9ac2-4f94-84c6-7c284ab676cd","Type":"ContainerDied","Data":"65dcf2061b4f7648accb48bcaf27113b70129034cad6d9ac5be7da9168939260"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.461297 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65dcf2061b4f7648accb48bcaf27113b70129034cad6d9ac5be7da9168939260" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.461354 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican29ba-account-delete-f7rqf" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.471909 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3aa24715-1df9-4a47-9817-4a1b68679d08/ovn-northd/0.log" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.471943 4856 generic.go:334] "Generic (PLEG): container finished" podID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" exitCode=139 Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.471992 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3aa24715-1df9-4a47-9817-4a1b68679d08","Type":"ContainerDied","Data":"0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.484644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92","Type":"ContainerDied","Data":"d90632d330d38e9be6cef5206c738ec59d81e870670d687f6bccc464bedfaadd"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.484747 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.496622 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi2c75-account-delete-c4rqx" event={"ID":"cfc2e8cc-04c1-4481-bf7d-d7e99972200f","Type":"ContainerDied","Data":"85e485f6c2ad89407e94f73dfdb8f6483fd2408bef8c7f63bada4c52644e6f6e"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.496661 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85e485f6c2ad89407e94f73dfdb8f6483fd2408bef8c7f63bada4c52644e6f6e" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.496729 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi2c75-account-delete-c4rqx" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.497093 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a66fa8fc-f908-43e7-a169-6156fc2092f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.497691 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxnqt\" (UniqueName: \"kubernetes.io/projected/a66fa8fc-f908-43e7-a169-6156fc2092f8-kube-api-access-pxnqt\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.497719 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.497731 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.497741 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tddl8\" (UniqueName: \"kubernetes.io/projected/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8-kube-api-access-tddl8\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.497751 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zc4r\" (UniqueName: \"kubernetes.io/projected/cfc2e8cc-04c1-4481-bf7d-d7e99972200f-kube-api-access-5zc4r\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.507674 4856 scope.go:117] "RemoveContainer" containerID="63572ca1ab3b819180a4d2cdb47a2c1f194a6daee761f767b694471277028ac6" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.514264 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1ccf431-f692-459f-b249-66bd9747d09c","Type":"ContainerDied","Data":"ed53884447da7721af2bd041c798876cdfd0f649185ae17195b88a7da8863f6e"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.514382 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.522393 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell07477-account-delete-5hzjb" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.522446 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.522501 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell07477-account-delete-5hzjb" event={"ID":"a66fa8fc-f908-43e7-a169-6156fc2092f8","Type":"ContainerDied","Data":"8aa6b04a05a5d76fcbff6f69a1a8e583400f0b15c27cb5021061d3d8cf44602a"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.522579 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aa6b04a05a5d76fcbff6f69a1a8e583400f0b15c27cb5021061d3d8cf44602a" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.523979 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance1a70-account-delete-m5qqx" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.525739 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.525851 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.525889 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.526002 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.526042 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.528087 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.528673 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-79bdcb776d-cl77m" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.528759 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.560622 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.570533 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron73f8-account-delete-b8zpb"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.577187 4856 scope.go:117] "RemoveContainer" containerID="34b8ef8ac4487f65f5dff6c904e4aa6b5fc3a3fd278121552b6ef063060959ec" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.586762 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron73f8-account-delete-b8zpb"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.603118 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.619886 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.619898 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.645194 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.667841 4856 scope.go:117] "RemoveContainer" containerID="30f6c92bfa88e0c50a824bbd5fb87ff5b3d7fbb4606aca9dfc830b62320a94a1" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.678031 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.735836 4856 scope.go:117] "RemoveContainer" containerID="748dcd5bb334b4bc2361b63a4afbafd4286f9d6147d5c3a3a460a57c1f55b549" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.738436 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18fcab55-6a49-4c21-9314-435129cf376a" path="/var/lib/kubelet/pods/18fcab55-6a49-4c21-9314-435129cf376a/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.741339 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="314d3b00-9bb4-4caa-a2dd-521e70e3d73d" path="/var/lib/kubelet/pods/314d3b00-9bb4-4caa-a2dd-521e70e3d73d/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.742488 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55308d58-6be6-483d-bc27-2904f15d32f0" path="/var/lib/kubelet/pods/55308d58-6be6-483d-bc27-2904f15d32f0/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.743862 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e9edb8-ed05-4d0f-aff1-d59b369cd76d" path="/var/lib/kubelet/pods/63e9edb8-ed05-4d0f-aff1-d59b369cd76d/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.745230 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c845eb-6695-4de7-8b4a-ef7c6a6701a4" path="/var/lib/kubelet/pods/70c845eb-6695-4de7-8b4a-ef7c6a6701a4/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.745912 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f9815d1-2297-4a66-9793-ba485053ca2a" path="/var/lib/kubelet/pods/8f9815d1-2297-4a66-9793-ba485053ca2a/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.746498 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec2d14e-7026-4f6d-a0b2-13ff53d5e124" path="/var/lib/kubelet/pods/aec2d14e-7026-4f6d-a0b2-13ff53d5e124/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.751240 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" path="/var/lib/kubelet/pods/b049e107-76c1-4669-adb3-7b92560ef90d/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.752490 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" path="/var/lib/kubelet/pods/b1ccf431-f692-459f-b249-66bd9747d09c/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.753836 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0a212d-74dc-40d3-84a4-bce83b78e788" path="/var/lib/kubelet/pods/bb0a212d-74dc-40d3-84a4-bce83b78e788/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.754600 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" 
path="/var/lib/kubelet/pods/e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.755095 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8" path="/var/lib/kubelet/pods/fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.758199 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.758233 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.767780 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.782003 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.789919 4856 scope.go:117] "RemoveContainer" containerID="f0b1a60d0b1a6de591d20e91274f4f847de400e209ee3854019d56a6b7527817" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.794107 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.817180 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3aa24715-1df9-4a47-9817-4a1b68679d08/ovn-northd/0.log" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.817239 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.820655 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.826450 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-79bdcb776d-cl77m"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.832188 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-79bdcb776d-cl77m"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.842999 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.844348 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.860653 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.861000 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.861018 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:26.872307 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.008846 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxgrp\" (UniqueName: \"kubernetes.io/projected/3aa24715-1df9-4a47-9817-4a1b68679d08-kube-api-access-zxgrp\") pod \"3aa24715-1df9-4a47-9817-4a1b68679d08\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.008904 4856 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-metrics-certs-tls-certs\") pod \"3aa24715-1df9-4a47-9817-4a1b68679d08\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.009007 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-config\") pod \"3aa24715-1df9-4a47-9817-4a1b68679d08\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.009079 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-northd-tls-certs\") pod \"3aa24715-1df9-4a47-9817-4a1b68679d08\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.009133 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-combined-ca-bundle\") pod \"3aa24715-1df9-4a47-9817-4a1b68679d08\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.009171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-scripts\") pod \"3aa24715-1df9-4a47-9817-4a1b68679d08\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.009188 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-rundir\") pod \"3aa24715-1df9-4a47-9817-4a1b68679d08\" (UID: \"3aa24715-1df9-4a47-9817-4a1b68679d08\") " Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.009951 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "3aa24715-1df9-4a47-9817-4a1b68679d08" (UID: "3aa24715-1df9-4a47-9817-4a1b68679d08"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.010119 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-scripts" (OuterVolumeSpecName: "scripts") pod "3aa24715-1df9-4a47-9817-4a1b68679d08" (UID: "3aa24715-1df9-4a47-9817-4a1b68679d08"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.010208 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-config" (OuterVolumeSpecName: "config") pod "3aa24715-1df9-4a47-9817-4a1b68679d08" (UID: "3aa24715-1df9-4a47-9817-4a1b68679d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.014528 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa24715-1df9-4a47-9817-4a1b68679d08-kube-api-access-zxgrp" (OuterVolumeSpecName: "kube-api-access-zxgrp") pod "3aa24715-1df9-4a47-9817-4a1b68679d08" (UID: "3aa24715-1df9-4a47-9817-4a1b68679d08"). InnerVolumeSpecName "kube-api-access-zxgrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.031618 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3aa24715-1df9-4a47-9817-4a1b68679d08" (UID: "3aa24715-1df9-4a47-9817-4a1b68679d08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.077390 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "3aa24715-1df9-4a47-9817-4a1b68679d08" (UID: "3aa24715-1df9-4a47-9817-4a1b68679d08"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.084310 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "3aa24715-1df9-4a47-9817-4a1b68679d08" (UID: "3aa24715-1df9-4a47-9817-4a1b68679d08"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.111266 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.111293 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.111302 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.111309 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa24715-1df9-4a47-9817-4a1b68679d08-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.111319 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/3aa24715-1df9-4a47-9817-4a1b68679d08-ovn-rundir\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.111327 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxgrp\" (UniqueName: \"kubernetes.io/projected/3aa24715-1df9-4a47-9817-4a1b68679d08-kube-api-access-zxgrp\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.111335 4856 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/3aa24715-1df9-4a47-9817-4a1b68679d08-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.535849 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_3aa24715-1df9-4a47-9817-4a1b68679d08/ovn-northd/0.log" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.535899 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"3aa24715-1df9-4a47-9817-4a1b68679d08","Type":"ContainerDied","Data":"42d5b029ad6e5e568979cf1befdc02d83feaca02fd64ca4d444389e8422eafc3"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.535929 4856 scope.go:117] "RemoveContainer" containerID="bef68756d75607bcf49b118ee011e2d46c1fca15a0f4988d5490ac2121c7d6ec" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.536024 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.542730 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone39ec-account-delete-6j6qn" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.567907 4856 scope.go:117] "RemoveContainer" containerID="0837d9798ef5bdddf9e9d11f1d4578cefe9b49abb3e9b5697828bae554298534" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.586802 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone39ec-account-delete-6j6qn"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.595448 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone39ec-account-delete-6j6qn"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.605212 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.613089 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.677487 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-lwbks"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.691239 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-lwbks"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.700494 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement00a5-account-delete-qrc4g"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.707127 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-00a5-account-create-tpf6w"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.713132 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-00a5-account-create-tpf6w"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.730229 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5f0e688e-e928-4da2-b752-fb04a6307071-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.730263 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88779\" (UniqueName: \"kubernetes.io/projected/5f0e688e-e928-4da2-b752-fb04a6307071-kube-api-access-88779\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.737620 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement00a5-account-delete-qrc4g"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.870780 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4896z"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.878097 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4896z"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.886136 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance1a70-account-delete-m5qqx"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.893209 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-1a70-account-create-dp24l"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.901071 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance1a70-account-delete-m5qqx"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.907896 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-1a70-account-create-dp24l"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.969136 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-create-vkfmv"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.974194 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-vkfmv"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.982321 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican29ba-account-delete-f7rqf"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.988084 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-29ba-account-create-xlkjx"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.993617 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican29ba-account-delete-f7rqf"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:27.999399 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-29ba-account-create-xlkjx"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.067747 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-t4bxw"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.074791 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-t4bxw"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.095150 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinderceda-account-delete-chlrj"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.101814 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ceda-account-create-p77h2"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.108134 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ceda-account-create-p77h2"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.114406 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinderceda-account-delete-chlrj"] Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.164974 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.165117 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.165262 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.170107 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.170180 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.170219 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.174108 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:28 crc kubenswrapper[4856]: E1122 07:32:28.174167 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.343545 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vr4x8"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.356740 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vr4x8"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.361895 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2c75-account-create-vt7h7"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.367119 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi2c75-account-delete-c4rqx"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.372576 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2c75-account-create-vt7h7"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.378763 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi2c75-account-delete-c4rqx"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.463881 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jrng7"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.475291 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jrng7"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.480341 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7477-account-create-b2vn8"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.493692 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell07477-account-delete-5hzjb"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 
07:32:28.493767 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7477-account-create-b2vn8"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.498777 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell07477-account-delete-5hzjb"] Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.554876 4856 generic.go:334] "Generic (PLEG): container finished" podID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerID="08e96c872138b89aa87fe681eda59fce3d594656121c84a13f4d89a1c5be6ca8" exitCode=0 Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.554956 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f69556b5c-qmsmf" event={"ID":"bfd5417e-43d6-4fe2-807c-8c203cb74c0a","Type":"ContainerDied","Data":"08e96c872138b89aa87fe681eda59fce3d594656121c84a13f4d89a1c5be6ca8"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.557684 4856 generic.go:334] "Generic (PLEG): container finished" podID="f6976ffd-7286-4347-b8af-607803a96768" containerID="3cdce92348e8a5abc8c54f390907c002ea710c31b653f0e1d2c690885f3a2712" exitCode=0 Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.557921 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf775d657-87zdn" event={"ID":"f6976ffd-7286-4347-b8af-607803a96768","Type":"ContainerDied","Data":"3cdce92348e8a5abc8c54f390907c002ea710c31b653f0e1d2c690885f3a2712"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.560907 4856 generic.go:334] "Generic (PLEG): container finished" podID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerID="79ce02c0e12e71d034284ed8bae98790aa968294e2855ff785b9729ddd86f16b" exitCode=0 Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.560938 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" event={"ID":"665dbe7c-5370-4a97-8502-e9b25c8acd3a","Type":"ContainerDied","Data":"79ce02c0e12e71d034284ed8bae98790aa968294e2855ff785b9729ddd86f16b"} Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.608216 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="b049e107-76c1-4669-adb3-7b92560ef90d" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.179:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.718920 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" path="/var/lib/kubelet/pods/07329cf7-c3ff-410a-8ab7-8f19ae9d3974/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.932903 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb1e06c-d7a8-4456-8614-d71e182d6ad2" path="/var/lib/kubelet/pods/1cb1e06c-d7a8-4456-8614-d71e182d6ad2/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.933637 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" path="/var/lib/kubelet/pods/1d3a5d31-7183-4298-87ea-4aa84aa395b4/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.934284 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b88f55c-12d5-4cba-a155-aa00c19c94f4" path="/var/lib/kubelet/pods/2b88f55c-12d5-4cba-a155-aa00c19c94f4/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.936253 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2ccea049-279e-43e8-9da2-04101b095f12" path="/var/lib/kubelet/pods/2ccea049-279e-43e8-9da2-04101b095f12/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.936889 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" path="/var/lib/kubelet/pods/39f7a457-9a5c-48b5-86c0-24d274596c8a/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.937848 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" path="/var/lib/kubelet/pods/3aa24715-1df9-4a47-9817-4a1b68679d08/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.939016 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d612c2-f369-4085-8e65-fc4d80281c5a" path="/var/lib/kubelet/pods/44d612c2-f369-4085-8e65-fc4d80281c5a/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.939606 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a38fdd7-2dc0-4ebc-91c7-359d0e437900" path="/var/lib/kubelet/pods/4a38fdd7-2dc0-4ebc-91c7-359d0e437900/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.940245 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="519a764c-9ac2-4f94-84c6-7c284ab676cd" path="/var/lib/kubelet/pods/519a764c-9ac2-4f94-84c6-7c284ab676cd/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.941171 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f" path="/var/lib/kubelet/pods/5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.941657 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db71edd-7a64-44d0-abda-ffc266851549" path="/var/lib/kubelet/pods/5db71edd-7a64-44d0-abda-ffc266851549/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.942075 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f0e688e-e928-4da2-b752-fb04a6307071" path="/var/lib/kubelet/pods/5f0e688e-e928-4da2-b752-fb04a6307071/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.942366 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cb599bf-1bc1-4497-82a8-2165e566aaa4" path="/var/lib/kubelet/pods/7cb599bf-1bc1-4497-82a8-2165e566aaa4/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.943289 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e55d62f-386c-4731-870a-a4909fb100b9" path="/var/lib/kubelet/pods/7e55d62f-386c-4731-870a-a4909fb100b9/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.943820 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eaa66e0-ee9b-4115-b385-222e8ac0c21c" path="/var/lib/kubelet/pods/9eaa66e0-ee9b-4115-b385-222e8ac0c21c/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.944421 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1ae4cc7-5c62-4d6d-a578-ed26f892a159" path="/var/lib/kubelet/pods/a1ae4cc7-5c62-4d6d-a578-ed26f892a159/volumes" Nov 22 07:32:28 crc kubenswrapper[4856]: I1122 07:32:28.946124 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a66fa8fc-f908-43e7-a169-6156fc2092f8" path="/var/lib/kubelet/pods/a66fa8fc-f908-43e7-a169-6156fc2092f8/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.106824 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4500308-9c55-4560-afc5-8e34d65bcfa7" 
path="/var/lib/kubelet/pods/b4500308-9c55-4560-afc5-8e34d65bcfa7/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.107502 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" path="/var/lib/kubelet/pods/c75cebe3-86db-4be1-9755-4bd8a83c9796/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.124931 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfc2e8cc-04c1-4481-bf7d-d7e99972200f" path="/var/lib/kubelet/pods/cfc2e8cc-04c1-4481-bf7d-d7e99972200f/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.125987 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0079df7-afe2-44a1-9c44-aabed35e0920" path="/var/lib/kubelet/pods/d0079df7-afe2-44a1-9c44-aabed35e0920/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.126537 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5bd7b67-77ce-4a59-a510-f5b39de503d8" path="/var/lib/kubelet/pods/d5bd7b67-77ce-4a59-a510-f5b39de503d8/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.127020 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df337886-1469-499f-bbb4-564f479cafa7" path="/var/lib/kubelet/pods/df337886-1469-499f-bbb4-564f479cafa7/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.128086 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0f8403e-a06a-4804-b60a-98974506f547" path="/var/lib/kubelet/pods/e0f8403e-a06a-4804-b60a-98974506f547/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.128913 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f061b34a-dff9-42e7-8b22-2cce81c12234" path="/var/lib/kubelet/pods/f061b34a-dff9-42e7-8b22-2cce81c12234/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.130029 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5897169-cfb8-4105-bc18-4fc7cbe28eee" path="/var/lib/kubelet/pods/f5897169-cfb8-4105-bc18-4fc7cbe28eee/volumes" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.223080 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.230843 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.249017 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364144 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data-custom\") pod \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364200 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8k9b\" (UniqueName: \"kubernetes.io/projected/665dbe7c-5370-4a97-8502-e9b25c8acd3a-kube-api-access-x8k9b\") pod \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364225 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-fernet-keys\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364255 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6lgj\" (UniqueName: \"kubernetes.io/projected/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-kube-api-access-w6lgj\") pod \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364278 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data\") pod \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364299 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-combined-ca-bundle\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364330 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-combined-ca-bundle\") pod \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364345 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data-custom\") pod \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364394 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-credential-keys\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364431 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-combined-ca-bundle\") pod 
\"665dbe7c-5370-4a97-8502-e9b25c8acd3a\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364449 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5nwf\" (UniqueName: \"kubernetes.io/projected/f6976ffd-7286-4347-b8af-607803a96768-kube-api-access-r5nwf\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364476 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-public-tls-certs\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364494 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-internal-tls-certs\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364525 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-logs\") pod \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\" (UID: \"bfd5417e-43d6-4fe2-807c-8c203cb74c0a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364548 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-config-data\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364565 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665dbe7c-5370-4a97-8502-e9b25c8acd3a-logs\") pod \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364585 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data\") pod \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\" (UID: \"665dbe7c-5370-4a97-8502-e9b25c8acd3a\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.364599 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-scripts\") pod \"f6976ffd-7286-4347-b8af-607803a96768\" (UID: \"f6976ffd-7286-4347-b8af-607803a96768\") " Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.366275 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-logs" (OuterVolumeSpecName: "logs") pod "bfd5417e-43d6-4fe2-807c-8c203cb74c0a" (UID: "bfd5417e-43d6-4fe2-807c-8c203cb74c0a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.367920 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/665dbe7c-5370-4a97-8502-e9b25c8acd3a-logs" (OuterVolumeSpecName: "logs") pod "665dbe7c-5370-4a97-8502-e9b25c8acd3a" (UID: "665dbe7c-5370-4a97-8502-e9b25c8acd3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.369655 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/665dbe7c-5370-4a97-8502-e9b25c8acd3a-kube-api-access-x8k9b" (OuterVolumeSpecName: "kube-api-access-x8k9b") pod "665dbe7c-5370-4a97-8502-e9b25c8acd3a" (UID: "665dbe7c-5370-4a97-8502-e9b25c8acd3a"). InnerVolumeSpecName "kube-api-access-x8k9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.369667 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-scripts" (OuterVolumeSpecName: "scripts") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.370621 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bfd5417e-43d6-4fe2-807c-8c203cb74c0a" (UID: "bfd5417e-43d6-4fe2-807c-8c203cb74c0a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.371987 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-kube-api-access-w6lgj" (OuterVolumeSpecName: "kube-api-access-w6lgj") pod "bfd5417e-43d6-4fe2-807c-8c203cb74c0a" (UID: "bfd5417e-43d6-4fe2-807c-8c203cb74c0a"). InnerVolumeSpecName "kube-api-access-w6lgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.372226 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "665dbe7c-5370-4a97-8502-e9b25c8acd3a" (UID: "665dbe7c-5370-4a97-8502-e9b25c8acd3a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.374127 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6976ffd-7286-4347-b8af-607803a96768-kube-api-access-r5nwf" (OuterVolumeSpecName: "kube-api-access-r5nwf") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "kube-api-access-r5nwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.377258 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.391718 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "665dbe7c-5370-4a97-8502-e9b25c8acd3a" (UID: "665dbe7c-5370-4a97-8502-e9b25c8acd3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.394739 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.394893 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bfd5417e-43d6-4fe2-807c-8c203cb74c0a" (UID: "bfd5417e-43d6-4fe2-807c-8c203cb74c0a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.395206 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-config-data" (OuterVolumeSpecName: "config-data") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.400890 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.412723 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.415701 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data" (OuterVolumeSpecName: "config-data") pod "665dbe7c-5370-4a97-8502-e9b25c8acd3a" (UID: "665dbe7c-5370-4a97-8502-e9b25c8acd3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.418620 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data" (OuterVolumeSpecName: "config-data") pod "bfd5417e-43d6-4fe2-807c-8c203cb74c0a" (UID: "bfd5417e-43d6-4fe2-807c-8c203cb74c0a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.456478 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f6976ffd-7286-4347-b8af-607803a96768" (UID: "f6976ffd-7286-4347-b8af-607803a96768"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466337 4856 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466387 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466402 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5nwf\" (UniqueName: \"kubernetes.io/projected/f6976ffd-7286-4347-b8af-607803a96768-kube-api-access-r5nwf\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466416 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466427 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466437 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466445 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466454 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/665dbe7c-5370-4a97-8502-e9b25c8acd3a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466461 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466469 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466478 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466486 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8k9b\" 
(UniqueName: \"kubernetes.io/projected/665dbe7c-5370-4a97-8502-e9b25c8acd3a-kube-api-access-x8k9b\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466495 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466504 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6lgj\" (UniqueName: \"kubernetes.io/projected/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-kube-api-access-w6lgj\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466527 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466535 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6976ffd-7286-4347-b8af-607803a96768-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466543 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfd5417e-43d6-4fe2-807c-8c203cb74c0a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.466551 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/665dbe7c-5370-4a97-8502-e9b25c8acd3a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.572117 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-f69556b5c-qmsmf" event={"ID":"bfd5417e-43d6-4fe2-807c-8c203cb74c0a","Type":"ContainerDied","Data":"b85204fbdfdf859441b4e75d2ce56a7c02f478ec81a958626410d1abc75e637c"} Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.572163 4856 scope.go:117] "RemoveContainer" containerID="08e96c872138b89aa87fe681eda59fce3d594656121c84a13f4d89a1c5be6ca8" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.572276 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-f69556b5c-qmsmf" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.583470 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf775d657-87zdn" event={"ID":"f6976ffd-7286-4347-b8af-607803a96768","Type":"ContainerDied","Data":"45afbbbd66324ae2272304d5459e72de6394c18ce1ca18ce20af0b57f9941bec"} Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.583557 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf775d657-87zdn" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.588024 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" event={"ID":"665dbe7c-5370-4a97-8502-e9b25c8acd3a","Type":"ContainerDied","Data":"c49e3d050a13649ab2b85b71a7fc7be52f04efb484f94677a16f0203aca5d2b7"} Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.588119 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-68b59dd9f8-dgbs9" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.613387 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-f69556b5c-qmsmf"] Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.620470 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-f69556b5c-qmsmf"] Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.624440 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-68b59dd9f8-dgbs9"] Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.629420 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-68b59dd9f8-dgbs9"] Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.633747 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6cf775d657-87zdn"] Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.637709 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6cf775d657-87zdn"] Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.790149 4856 scope.go:117] "RemoveContainer" containerID="df445e7e3ade77c2dd919f37116927b5b747d07550482260db0ad6f5970682fd" Nov 22 07:32:29 crc kubenswrapper[4856]: I1122 07:32:29.858202 4856 scope.go:117] "RemoveContainer" containerID="3cdce92348e8a5abc8c54f390907c002ea710c31b653f0e1d2c690885f3a2712" Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.039308 4856 scope.go:117] "RemoveContainer" containerID="79ce02c0e12e71d034284ed8bae98790aa968294e2855ff785b9729ddd86f16b" Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.580890 4856 scope.go:117] "RemoveContainer" containerID="89fb7a00fd4efc74515a0c3d4a20db20a62bcd9de48f98ba66ab6036caf8a420" Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.600311 4856 generic.go:334] "Generic (PLEG): container finished" podID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerID="1a983d61b8dfe6b5b848b2945b31f7053bd5045dbc03ba4867c1e7855f9b3dcd" exitCode=0 Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.600377 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89","Type":"ContainerDied","Data":"1a983d61b8dfe6b5b848b2945b31f7053bd5045dbc03ba4867c1e7855f9b3dcd"} Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.603365 4856 generic.go:334] "Generic (PLEG): container finished" podID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerID="fe053dc6b4b700a119cd588385a844042a2dde38e5a679600fc61619199db0cc" exitCode=0 Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.603394 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429","Type":"ContainerDied","Data":"fe053dc6b4b700a119cd588385a844042a2dde38e5a679600fc61619199db0cc"} Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.924609 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" path="/var/lib/kubelet/pods/665dbe7c-5370-4a97-8502-e9b25c8acd3a/volumes" Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.925429 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" path="/var/lib/kubelet/pods/bfd5417e-43d6-4fe2-807c-8c203cb74c0a/volumes" Nov 22 07:32:30 crc kubenswrapper[4856]: I1122 07:32:30.925972 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f6976ffd-7286-4347-b8af-607803a96768" path="/var/lib/kubelet/pods/f6976ffd-7286-4347-b8af-607803a96768/volumes" Nov 22 07:32:31 crc kubenswrapper[4856]: E1122 07:32:31.192360 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode782df2b_d7a8_4319_aead_d5165a61314a.slice\": RecentStats: unable to find data in memory cache]" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.387675 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.393369 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.522922 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-confd\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523489 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-plugins-conf\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523527 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-plugins-conf\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523555 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523607 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-erlang-cookie-secret\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523625 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-tls\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523645 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-erlang-cookie\") pod 
\"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523664 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-erlang-cookie-secret\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523683 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-pod-info\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523705 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkd7q\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-kube-api-access-pkd7q\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523721 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-tls\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523743 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-plugins\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523770 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcvcb\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-kube-api-access-xcvcb\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523801 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-erlang-cookie\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523818 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-pod-info\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523834 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523851 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-server-conf\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523866 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.523887 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-confd\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.524294 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.525261 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.525342 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.525827 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.526438 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.528805 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-kube-api-access-pkd7q" (OuterVolumeSpecName: "kube-api-access-pkd7q") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "kube-api-access-pkd7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.528875 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-plugins\") pod \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\" (UID: \"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.528905 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-server-conf\") pod \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\" (UID: \"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429\") " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.530328 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.532112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-pod-info" (OuterVolumeSpecName: "pod-info") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.532197 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.532880 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.534417 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.534450 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-kube-api-access-xcvcb" (OuterVolumeSpecName: "kube-api-access-xcvcb") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "kube-api-access-xcvcb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.534612 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535770 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535817 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535830 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535839 4856 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535881 4856 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535900 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535910 4856 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535918 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535926 4856 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535935 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkd7q\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-kube-api-access-pkd7q\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535943 4856 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535952 4856 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.535962 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcvcb\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-kube-api-access-xcvcb\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.544303 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.549015 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data" (OuterVolumeSpecName: "config-data") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.557386 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-pod-info" (OuterVolumeSpecName: "pod-info") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.559790 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.561858 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.564792 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data" (OuterVolumeSpecName: "config-data") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.573102 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.594051 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-server-conf" (OuterVolumeSpecName: "server-conf") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.600228 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-server-conf" (OuterVolumeSpecName: "server-conf") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.615450 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" (UID: "4ac8c44e-0667-43f7-aebd-a7b4c5bcb429"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.618127 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89","Type":"ContainerDied","Data":"28bac9423af5affa00aaa0be97f54a42134b5ba014610634481243e61a0a4c61"} Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.618172 4856 scope.go:117] "RemoveContainer" containerID="1a983d61b8dfe6b5b848b2945b31f7053bd5045dbc03ba4867c1e7855f9b3dcd" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.618287 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.619442 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" (UID: "0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.622287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"4ac8c44e-0667-43f7-aebd-a7b4c5bcb429","Type":"ContainerDied","Data":"791ac470b0a6a247ad8c7af344f1a356c217449002fca7553a956a180dba9c6b"} Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.622354 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637570 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637604 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637619 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637630 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637641 4856 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637653 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637664 4856 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637676 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637686 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637697 4856 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.637709 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.656003 4856 scope.go:117] "RemoveContainer" containerID="c10cec0c537e858b480226f22e5be592da7a6e4e6ce33e779e0e631dde2f8987" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.662198 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.669497 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.719028 4856 scope.go:117] "RemoveContainer" 
containerID="fe053dc6b4b700a119cd588385a844042a2dde38e5a679600fc61619199db0cc" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.761467 4856 scope.go:117] "RemoveContainer" containerID="f5173b778bc6df84dd44ccb0081f7b0478ee848a30a82116594357ab8bd607c4" Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.949323 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:32:31 crc kubenswrapper[4856]: I1122 07:32:31.956553 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 07:32:32 crc kubenswrapper[4856]: I1122 07:32:32.719663 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" path="/var/lib/kubelet/pods/0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89/volumes" Nov 22 07:32:32 crc kubenswrapper[4856]: I1122 07:32:32.720794 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" path="/var/lib/kubelet/pods/4ac8c44e-0667-43f7-aebd-a7b4c5bcb429/volumes" Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.164600 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.165249 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.165579 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.165602 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.166178 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.167073 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.168317 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.168409 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:32:33 crc kubenswrapper[4856]: I1122 07:32:33.710079 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:32:33 crc kubenswrapper[4856]: E1122 07:32:33.710429 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:32:35 crc kubenswrapper[4856]: E1122 07:32:35.572833 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 07:32:35 crc kubenswrapper[4856]: E1122 07:32:35.574866 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 07:32:35 crc kubenswrapper[4856]: E1122 07:32:35.576429 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Nov 22 07:32:35 crc kubenswrapper[4856]: E1122 07:32:35.576498 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="galera" Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.164440 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container 
process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.165322 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.165868 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.165966 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.166439 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.168421 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.170572 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:38 crc kubenswrapper[4856]: E1122 07:32:38.170618 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:32:38 crc kubenswrapper[4856]: I1122 07:32:38.699995 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerID="273797dc3d1ff426732192e04e6bd642a97dc99523e657e806f91b951e7b928a" exitCode=0 Nov 22 07:32:38 crc kubenswrapper[4856]: I1122 07:32:38.700040 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-5c448d48d9-lmlhj" event={"ID":"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313","Type":"ContainerDied","Data":"273797dc3d1ff426732192e04e6bd642a97dc99523e657e806f91b951e7b928a"} Nov 22 07:32:38 crc kubenswrapper[4856]: I1122 07:32:38.960499 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.070115 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxmvb\" (UniqueName: \"kubernetes.io/projected/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-kube-api-access-wxmvb\") pod \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.070160 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-internal-tls-certs\") pod \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.070221 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-combined-ca-bundle\") pod \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.070251 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-public-tls-certs\") pod \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.070864 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-ovndb-tls-certs\") pod \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.070997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-config\") pod \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.071131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-httpd-config\") pod \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\" (UID: \"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313\") " Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.075775 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-kube-api-access-wxmvb" (OuterVolumeSpecName: "kube-api-access-wxmvb") pod "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" (UID: "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313"). InnerVolumeSpecName "kube-api-access-wxmvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.077476 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" (UID: "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.111252 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-config" (OuterVolumeSpecName: "config") pod "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" (UID: "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.113091 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" (UID: "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.115152 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" (UID: "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.124348 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" (UID: "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.135355 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" (UID: "cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.172653 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxmvb\" (UniqueName: \"kubernetes.io/projected/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-kube-api-access-wxmvb\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.172726 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.172737 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.172748 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.172758 4856 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.172769 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.172777 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.711853 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c448d48d9-lmlhj" event={"ID":"cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313","Type":"ContainerDied","Data":"5aa5082b981bf0694d914bdadbcba55364731f0db70a966eab09f765f33a7755"} Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.711934 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c448d48d9-lmlhj" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.711983 4856 scope.go:117] "RemoveContainer" containerID="e4ce8b9ed4b91b14fe577f0657b03ac8159da3736fa9337862e230ef16a43afb" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.745696 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c448d48d9-lmlhj"] Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.749805 4856 scope.go:117] "RemoveContainer" containerID="273797dc3d1ff426732192e04e6bd642a97dc99523e657e806f91b951e7b928a" Nov 22 07:32:39 crc kubenswrapper[4856]: I1122 07:32:39.751135 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c448d48d9-lmlhj"] Nov 22 07:32:40 crc kubenswrapper[4856]: I1122 07:32:40.721045 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" path="/var/lib/kubelet/pods/cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313/volumes" Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.164208 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.164820 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.165047 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.165072 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.165558 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.166814 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.168837 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:43 crc kubenswrapper[4856]: E1122 07:32:43.168903 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.696650 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.747968 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcqg5\" (UniqueName: \"kubernetes.io/projected/b27ecbc9-0058-49d3-8715-826a4a1bb544-kube-api-access-wcqg5\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.748009 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-kolla-config\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.748063 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-operator-scripts\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.748084 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.748128 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-combined-ca-bundle\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.748161 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-generated\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.748176 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-default\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: 
\"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.748209 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-galera-tls-certs\") pod \"b27ecbc9-0058-49d3-8715-826a4a1bb544\" (UID: \"b27ecbc9-0058-49d3-8715-826a4a1bb544\") " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.751408 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.752040 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.752126 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.752900 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.758160 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27ecbc9-0058-49d3-8715-826a4a1bb544-kube-api-access-wcqg5" (OuterVolumeSpecName: "kube-api-access-wcqg5") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "kube-api-access-wcqg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.763835 4856 generic.go:334] "Generic (PLEG): container finished" podID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" exitCode=0 Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.763884 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27ecbc9-0058-49d3-8715-826a4a1bb544","Type":"ContainerDied","Data":"bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7"} Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.763915 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b27ecbc9-0058-49d3-8715-826a4a1bb544","Type":"ContainerDied","Data":"88b50e703fe21af5e61c3aaf7e283d3f6fb2d2434709cdbb182ffec13dadd42d"} Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.763944 4856 scope.go:117] "RemoveContainer" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.763981 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.772665 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "mysql-db") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.785041 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.800687 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "b27ecbc9-0058-49d3-8715-826a4a1bb544" (UID: "b27ecbc9-0058-49d3-8715-826a4a1bb544"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.840270 4856 scope.go:117] "RemoveContainer" containerID="6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850489 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850569 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-generated\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850587 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-config-data-default\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850598 4856 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ecbc9-0058-49d3-8715-826a4a1bb544-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850610 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcqg5\" (UniqueName: \"kubernetes.io/projected/b27ecbc9-0058-49d3-8715-826a4a1bb544-kube-api-access-wcqg5\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850623 4856 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-kolla-config\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850635 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b27ecbc9-0058-49d3-8715-826a4a1bb544-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.850676 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.864893 4856 scope.go:117] "RemoveContainer" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" Nov 22 07:32:44 crc kubenswrapper[4856]: E1122 07:32:44.865474 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7\": container with ID starting with bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7 not found: ID does not exist" containerID="bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.865534 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7"} err="failed to get container status \"bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7\": rpc error: code = NotFound desc = could not find container \"bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7\": container with ID starting with 
bd6a490e89f365657d044043a2ed8ad3a91c8441955834491ffd93735bec7ab7 not found: ID does not exist" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.865556 4856 scope.go:117] "RemoveContainer" containerID="6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd" Nov 22 07:32:44 crc kubenswrapper[4856]: E1122 07:32:44.865873 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd\": container with ID starting with 6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd not found: ID does not exist" containerID="6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.865957 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd"} err="failed to get container status \"6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd\": rpc error: code = NotFound desc = could not find container \"6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd\": container with ID starting with 6707e619f62035dae0710f67046d5933fec25522a3bdba7ddad574baf16ec1fd not found: ID does not exist" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.867832 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 22 07:32:44 crc kubenswrapper[4856]: I1122 07:32:44.952600 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:45 crc kubenswrapper[4856]: I1122 07:32:45.093494 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:32:45 crc kubenswrapper[4856]: I1122 07:32:45.098821 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 07:32:46 crc kubenswrapper[4856]: I1122 07:32:46.721243 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" path="/var/lib/kubelet/pods/b27ecbc9-0058-49d3-8715-826a4a1bb544/volumes" Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.163903 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964 is running failed: container process not found" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.164356 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964 is running failed: container process not found" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.163921 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.164667 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964 is running failed: container process not found" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.164693 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.164960 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.165235 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.165272 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-zz5h4" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.659738 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zz5h4_285d77d1-e278-4664-97f0-7562e2740a0b/ovs-vswitchd/0.log" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.660669 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707207 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-run\") pod \"285d77d1-e278-4664-97f0-7562e2740a0b\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707248 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285d77d1-e278-4664-97f0-7562e2740a0b-scripts\") pod \"285d77d1-e278-4664-97f0-7562e2740a0b\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707271 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-log\") pod \"285d77d1-e278-4664-97f0-7562e2740a0b\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707301 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-lib\") pod \"285d77d1-e278-4664-97f0-7562e2740a0b\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg5jb\" (UniqueName: \"kubernetes.io/projected/285d77d1-e278-4664-97f0-7562e2740a0b-kube-api-access-bg5jb\") pod \"285d77d1-e278-4664-97f0-7562e2740a0b\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707342 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-etc-ovs\") pod \"285d77d1-e278-4664-97f0-7562e2740a0b\" (UID: \"285d77d1-e278-4664-97f0-7562e2740a0b\") " Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707390 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-run" (OuterVolumeSpecName: "var-run") pod "285d77d1-e278-4664-97f0-7562e2740a0b" (UID: "285d77d1-e278-4664-97f0-7562e2740a0b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707439 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-log" (OuterVolumeSpecName: "var-log") pod "285d77d1-e278-4664-97f0-7562e2740a0b" (UID: "285d77d1-e278-4664-97f0-7562e2740a0b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707436 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-lib" (OuterVolumeSpecName: "var-lib") pod "285d77d1-e278-4664-97f0-7562e2740a0b" (UID: "285d77d1-e278-4664-97f0-7562e2740a0b"). InnerVolumeSpecName "var-lib". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707530 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "285d77d1-e278-4664-97f0-7562e2740a0b" (UID: "285d77d1-e278-4664-97f0-7562e2740a0b"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707695 4856 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-run\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707711 4856 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-log\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707719 4856 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-var-lib\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.707727 4856 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/285d77d1-e278-4664-97f0-7562e2740a0b-etc-ovs\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.708437 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/285d77d1-e278-4664-97f0-7562e2740a0b-scripts" (OuterVolumeSpecName: "scripts") pod "285d77d1-e278-4664-97f0-7562e2740a0b" (UID: "285d77d1-e278-4664-97f0-7562e2740a0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.712912 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.713159 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.713705 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285d77d1-e278-4664-97f0-7562e2740a0b-kube-api-access-bg5jb" (OuterVolumeSpecName: "kube-api-access-bg5jb") pod "285d77d1-e278-4664-97f0-7562e2740a0b" (UID: "285d77d1-e278-4664-97f0-7562e2740a0b"). InnerVolumeSpecName "kube-api-access-bg5jb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.806878 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zz5h4_285d77d1-e278-4664-97f0-7562e2740a0b/ovs-vswitchd/0.log" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.807655 4856 generic.go:334] "Generic (PLEG): container finished" podID="285d77d1-e278-4664-97f0-7562e2740a0b" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" exitCode=137 Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.807702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zz5h4" event={"ID":"285d77d1-e278-4664-97f0-7562e2740a0b","Type":"ContainerDied","Data":"1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964"} Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.807722 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zz5h4" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.807780 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zz5h4" event={"ID":"285d77d1-e278-4664-97f0-7562e2740a0b","Type":"ContainerDied","Data":"8348d4e10904380f8f331e39a468968f43a9942115652a22a69a7414ef1393da"} Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.807801 4856 scope.go:117] "RemoveContainer" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.809232 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/285d77d1-e278-4664-97f0-7562e2740a0b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.809257 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg5jb\" (UniqueName: \"kubernetes.io/projected/285d77d1-e278-4664-97f0-7562e2740a0b-kube-api-access-bg5jb\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.834334 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-zz5h4"] Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.841663 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-zz5h4"] Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.842852 4856 scope.go:117] "RemoveContainer" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.862192 4856 scope.go:117] "RemoveContainer" containerID="358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.893195 4856 scope.go:117] "RemoveContainer" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.893777 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964\": container with ID starting with 1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964 not found: ID does not exist" containerID="1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.893805 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964"} err="failed to get container status \"1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964\": rpc error: code = NotFound desc = could not find container \"1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964\": container with ID starting with 1f5dcaa1eca51a18c0b72c027b4bc6c8c30363606c2c26e996988970261e1964 not found: ID does not exist" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.893826 4856 scope.go:117] "RemoveContainer" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.894230 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470\": container with ID starting with f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 not found: ID does not exist" containerID="f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.894252 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470"} err="failed to get container status \"f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470\": rpc error: code = NotFound desc = could not find container \"f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470\": container with ID starting with f31d69233cf4bae0ebfbf69006e3ec3c5023cacbdf108beedc579220e092e470 not found: ID does not exist" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.894265 4856 scope.go:117] "RemoveContainer" containerID="358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838" Nov 22 07:32:48 crc kubenswrapper[4856]: E1122 07:32:48.894627 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838\": container with ID starting with 358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838 not found: ID does not exist" containerID="358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838" Nov 22 07:32:48 crc kubenswrapper[4856]: I1122 07:32:48.894646 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838"} err="failed to get container status \"358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838\": rpc error: code = NotFound desc = could not find container \"358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838\": container with ID starting with 358cb685cfdf5202beff450aed4be128c31a4dc4aec2d4dd68e5c932a2da3838 not found: ID does not exist" Nov 22 07:32:49 crc kubenswrapper[4856]: I1122 07:32:49.829289 4856 generic.go:334] "Generic (PLEG): container finished" podID="8b649794-30ba-493c-9285-05a58981ed36" containerID="f6b36d1ad73481da60eada98f0cdb3c61e2e68ee475247d1ff9682f6f708afb3" exitCode=137 Nov 22 07:32:49 crc kubenswrapper[4856]: I1122 07:32:49.829322 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"f6b36d1ad73481da60eada98f0cdb3c61e2e68ee475247d1ff9682f6f708afb3"} Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 
07:32:50.719840 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" path="/var/lib/kubelet/pods/285d77d1-e278-4664-97f0-7562e2740a0b/volumes" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.726953 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.840023 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-cache\") pod \"8b649794-30ba-493c-9285-05a58981ed36\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.840095 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq8fr\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-kube-api-access-rq8fr\") pod \"8b649794-30ba-493c-9285-05a58981ed36\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.840147 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") pod \"8b649794-30ba-493c-9285-05a58981ed36\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.840198 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-lock\") pod \"8b649794-30ba-493c-9285-05a58981ed36\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.840265 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"8b649794-30ba-493c-9285-05a58981ed36\" (UID: \"8b649794-30ba-493c-9285-05a58981ed36\") " Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.840832 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-cache" (OuterVolumeSpecName: "cache") pod "8b649794-30ba-493c-9285-05a58981ed36" (UID: "8b649794-30ba-493c-9285-05a58981ed36"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.841586 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-lock" (OuterVolumeSpecName: "lock") pod "8b649794-30ba-493c-9285-05a58981ed36" (UID: "8b649794-30ba-493c-9285-05a58981ed36"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.845091 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8b649794-30ba-493c-9285-05a58981ed36" (UID: "8b649794-30ba-493c-9285-05a58981ed36"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.845699 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-kube-api-access-rq8fr" (OuterVolumeSpecName: "kube-api-access-rq8fr") pod "8b649794-30ba-493c-9285-05a58981ed36" (UID: "8b649794-30ba-493c-9285-05a58981ed36"). InnerVolumeSpecName "kube-api-access-rq8fr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.845983 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "swift") pod "8b649794-30ba-493c-9285-05a58981ed36" (UID: "8b649794-30ba-493c-9285-05a58981ed36"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.849145 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"8b649794-30ba-493c-9285-05a58981ed36","Type":"ContainerDied","Data":"c3a82fe013330aee4b49a20895b7832fbd9f0ff8a51956b8475b88650d0ca91f"} Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.849202 4856 scope.go:117] "RemoveContainer" containerID="dafe6ce95027e629d7af60bc33995b31a71bb7ef4de51b371a2ee48e7639d083" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.849295 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.910418 4856 scope.go:117] "RemoveContainer" containerID="423b2c9f27662f7d6367f52a13a9033ed0e18cb78b5dc553d9b64162d80e2544" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.923872 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.930167 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.937501 4856 scope.go:117] "RemoveContainer" containerID="9b6021a67115d6e55eab967cf6d9caa17bd06d922a3d54b43b6f5dec9196e96d" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.941618 4856 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-cache\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.941645 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq8fr\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-kube-api-access-rq8fr\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.941654 4856 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8b649794-30ba-493c-9285-05a58981ed36-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.941664 4856 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/8b649794-30ba-493c-9285-05a58981ed36-lock\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.941687 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 22 07:32:50 
crc kubenswrapper[4856]: I1122 07:32:50.958429 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.966428 4856 scope.go:117] "RemoveContainer" containerID="1cf12acdc3f6a6abb938bdcfc295ffa2101088f787027d51f80b951797bb5873" Nov 22 07:32:50 crc kubenswrapper[4856]: I1122 07:32:50.985613 4856 scope.go:117] "RemoveContainer" containerID="f6b36d1ad73481da60eada98f0cdb3c61e2e68ee475247d1ff9682f6f708afb3" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.003634 4856 scope.go:117] "RemoveContainer" containerID="ecc44836c8466c6fbcc848350b1a769fe7507c5c9ee03a0001c9685bf0cd78bc" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.039607 4856 scope.go:117] "RemoveContainer" containerID="9f435952eb044c7ab5dcb833fc12c8685ca6e3fd82a9405acc66ff7e0a5e1488" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.042846 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.082395 4856 scope.go:117] "RemoveContainer" containerID="4011c89f0b6803e45417d4182117f87df790db47e51c6dc417714bdbab0d9328" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.101236 4856 scope.go:117] "RemoveContainer" containerID="be283db24da6932b997e62df069e78ce522bed9042d62990be78c405a0d8baff" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.119184 4856 scope.go:117] "RemoveContainer" containerID="7739725925a289b294a1260a2963889a83f70dbfee02df9ebc4a046996eec165" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.137644 4856 scope.go:117] "RemoveContainer" containerID="5aab3b9349e7624b4bdd58b9ddc145142c8697523405f28d16e4f3c04ea145ae" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.158763 4856 scope.go:117] "RemoveContainer" containerID="a5a09f33961facab4f00ff54e2e02326d023fd20d2ac164e6dacaf7131204425" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.178495 4856 scope.go:117] "RemoveContainer" containerID="199edfe080cf33b200ed5effe88b6a79246b1c89eb804c543da87be52e6c569e" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.197555 4856 scope.go:117] "RemoveContainer" containerID="507063dad370d0aa753a3a159944ec9f090dd4d59c3360495ed98d90f8250c2e" Nov 22 07:32:51 crc kubenswrapper[4856]: I1122 07:32:51.218790 4856 scope.go:117] "RemoveContainer" containerID="c22be9584965ebc42abd66c9bfe89aca421bd210a908db30115541e641df706a" Nov 22 07:32:52 crc kubenswrapper[4856]: I1122 07:32:52.721334 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b649794-30ba-493c-9285-05a58981ed36" path="/var/lib/kubelet/pods/8b649794-30ba-493c-9285-05a58981ed36/volumes" Nov 22 07:32:59 crc kubenswrapper[4856]: I1122 07:32:59.710790 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:32:59 crc kubenswrapper[4856]: E1122 07:32:59.711211 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:33:14 crc 
kubenswrapper[4856]: I1122 07:33:14.709239 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:33:14 crc kubenswrapper[4856]: E1122 07:33:14.710004 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.240726 4856 scope.go:117] "RemoveContainer" containerID="a47e77fe08fca49e6ceafb9d80866fae9b23a969620c35a33c829ec365ae8186" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.265959 4856 scope.go:117] "RemoveContainer" containerID="c4f22629590a58ea96054eca0236b16ae796cd096384d9dace24279356f1b90a" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.317818 4856 scope.go:117] "RemoveContainer" containerID="310c395b35f6d8ce91619f7277306489a7826437eb66bb531fd9e9b73c33c26d" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.338284 4856 scope.go:117] "RemoveContainer" containerID="07540c49088f81e1a1251cf274f0f75cda056c029e0a8f46a36b5f128a0b8a70" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.379749 4856 scope.go:117] "RemoveContainer" containerID="a214e899de1fa12d232a5f7ae7432c6684e5ff6c933f40705502235cb59cf8ba" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.398411 4856 scope.go:117] "RemoveContainer" containerID="c3368e9bb887c530083f7a09aacec83accf90141e2a0af6a2fffe8655043dddd" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.442648 4856 scope.go:117] "RemoveContainer" containerID="4525c416276fb4175bdbacfe90bd2046611ed1f320269576d4c9ceac24f98c99" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.472946 4856 scope.go:117] "RemoveContainer" containerID="d90855eebce3d108812258ad5edea4fee4c4190885d76c83348fd0a3eef22ab3" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.497958 4856 scope.go:117] "RemoveContainer" containerID="bea09664f4a7eb9a8d241c32f7456d5ae5ff024cd6a89a93dc01db73a7452dd2" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.520779 4856 scope.go:117] "RemoveContainer" containerID="0db464afbcabb57a2015f38a0ea5f2f9f6a53038f4ec14b448d5d42cb67e6f59" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.542757 4856 scope.go:117] "RemoveContainer" containerID="208c215251646795ab6cb26edb516b97fe496400e51e1e8f665ed937642f1204" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.559660 4856 scope.go:117] "RemoveContainer" containerID="6dbc2c42beeb03f5f93f9ca2890f1f6f74875cdba0da041cffe6c07e36ced3cf" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.580771 4856 scope.go:117] "RemoveContainer" containerID="6e1eddbe04b1be2ec22be58a8fe8aa1417daa37d9b0e151e04b44582e14fb8d3" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.602481 4856 scope.go:117] "RemoveContainer" containerID="6525b4e2de9799c74ff23a66dac29f1d95107568c6850b31be0fdcb315d454e7" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.623115 4856 scope.go:117] "RemoveContainer" containerID="ea317fed3d371307a9aff011a5dcf70ef5c76887c02d1086551cc16eb012b860" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.643142 4856 scope.go:117] "RemoveContainer" containerID="cde1d5e34fed489806a536b0abe875c6d7151093d591a234d52ed41c693e2b63" Nov 22 07:33:17 crc 
kubenswrapper[4856]: I1122 07:33:17.663410 4856 scope.go:117] "RemoveContainer" containerID="1e67d8cd584ceeb200c9518aba1f39886ff3c391d12da5f8ac55f49863259170" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.695914 4856 scope.go:117] "RemoveContainer" containerID="85cf79e96fc13c34ea3abd9d2877f21dd93203c7de56aaa2097e3d1a062e3a4e" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.725924 4856 scope.go:117] "RemoveContainer" containerID="e506d22d373d63c5c5df7338ebfdf37d6f2889f6528d9d2937bd53d522fa657f" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.765121 4856 scope.go:117] "RemoveContainer" containerID="fe385768b79ac31126e52f3869af0ea80aced065b1de1ca8f0de99c92dbf7f22" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.793387 4856 scope.go:117] "RemoveContainer" containerID="1d30ff58d06db234c6dbea2039224f88949d93b4b3b10ffc82e7d58969c01365" Nov 22 07:33:17 crc kubenswrapper[4856]: I1122 07:33:17.835384 4856 scope.go:117] "RemoveContainer" containerID="c86e59d45b1d7c3c0e2462f84ad716038842e4262fc6e161703b245f174c63d7" Nov 22 07:33:28 crc kubenswrapper[4856]: I1122 07:33:28.713946 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:33:28 crc kubenswrapper[4856]: E1122 07:33:28.714782 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:33:41 crc kubenswrapper[4856]: I1122 07:33:41.709701 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:33:41 crc kubenswrapper[4856]: E1122 07:33:41.710453 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:33:55 crc kubenswrapper[4856]: I1122 07:33:55.709351 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:33:55 crc kubenswrapper[4856]: E1122 07:33:55.710033 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:34:06 crc kubenswrapper[4856]: I1122 07:34:06.710412 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:34:06 crc kubenswrapper[4856]: E1122 07:34:06.711094 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:34:17 crc kubenswrapper[4856]: I1122 07:34:17.709837 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:34:17 crc kubenswrapper[4856]: E1122 07:34:17.710632 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.631129 4856 scope.go:117] "RemoveContainer" containerID="b6356dec8e3af2060f0508772909c3164a9dbf1ad47a0fddc1e261b2db1f8b4f" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.658787 4856 scope.go:117] "RemoveContainer" containerID="d8ce4b0ff118154c61796fcae8303cb155b81851dc7f2edb9facb56abc699957" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.685074 4856 scope.go:117] "RemoveContainer" containerID="e51f9c56ea1c0e3182c1c3c0d9428cb803206db87582ea1f3fdb797ee1304a25" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.713636 4856 scope.go:117] "RemoveContainer" containerID="4692846d3f96731874a1774fa70ea5b09c98e71c90605b52844703235cc88004" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.732881 4856 scope.go:117] "RemoveContainer" containerID="4b3a34271aa5ac787753f7b938e7ae22608f0db7bdfd20a4d3e671e077fbfc32" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.758029 4856 scope.go:117] "RemoveContainer" containerID="d74045845a7dba814efb401d7b033582ccdbf8ee08845c8e8fdf207bd5c6d465" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.795470 4856 scope.go:117] "RemoveContainer" containerID="cfc3e2910129f9e8a60e68b621e6eee3267b6c9aa86e078920823532cee13fa0" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.816258 4856 scope.go:117] "RemoveContainer" containerID="87c89906bf819de89643974ff91061bf464fcbe0da565621b557fdb026d38601" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.835592 4856 scope.go:117] "RemoveContainer" containerID="d5d879863319206355a86b5a2ece30cd07fa7ad8e1156bd4465388cd8948de14" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.854582 4856 scope.go:117] "RemoveContainer" containerID="efb8a5d4d8343649cf607898ba5bd73e65ec3c1f989943bfee33058f83c6e13b" Nov 22 07:34:18 crc kubenswrapper[4856]: I1122 07:34:18.873941 4856 scope.go:117] "RemoveContainer" containerID="4b3192676d3e19f237ce934c70e2e2105edb9e9415b2d7c5b848a4de24f6ac9a" Nov 22 07:34:30 crc kubenswrapper[4856]: I1122 07:34:30.709477 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:34:30 crc kubenswrapper[4856]: E1122 07:34:30.710491 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:34:41 crc 
kubenswrapper[4856]: I1122 07:34:41.710309 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:34:41 crc kubenswrapper[4856]: E1122 07:34:41.710952 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:34:53 crc kubenswrapper[4856]: I1122 07:34:53.709004 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:34:53 crc kubenswrapper[4856]: E1122 07:34:53.709738 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:35:04 crc kubenswrapper[4856]: I1122 07:35:04.709930 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:35:04 crc kubenswrapper[4856]: E1122 07:35:04.710464 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:35:18 crc kubenswrapper[4856]: I1122 07:35:18.713372 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:35:18 crc kubenswrapper[4856]: E1122 07:35:18.714158 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:35:32 crc kubenswrapper[4856]: I1122 07:35:32.710104 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:35:33 crc kubenswrapper[4856]: I1122 07:35:33.175605 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"390190f2e77b02ceb8fd2ed59e451cf120a15ca7d5e154142042d4828039a7b8"} Nov 22 07:36:19 crc kubenswrapper[4856]: I1122 07:36:19.017286 4856 scope.go:117] "RemoveContainer" containerID="79ac3da01d567af671e8140ba0abef013a08691b348676216927e29a7c793bcc" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.171672 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bzzwb"] Nov 22 07:37:10 crc 
kubenswrapper[4856]: E1122 07:37:10.172729 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.172744 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.172761 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server-init" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.172814 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server-init" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.172840 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfc2e8cc-04c1-4481-bf7d-d7e99972200f" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.172849 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfc2e8cc-04c1-4481-bf7d-d7e99972200f" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.172858 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.172902 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.172915 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.172935 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-server" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.172959 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerName="rabbitmq" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.172969 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerName="rabbitmq" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.172979 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.172986 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173000 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173007 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173017 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerName="neutron-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173025 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" 
containerName="neutron-api" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173038 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e9edb8-ed05-4d0f-aff1-d59b369cd76d" containerName="memcached" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173048 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e9edb8-ed05-4d0f-aff1-d59b369cd76d" containerName="memcached" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173063 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" containerName="nova-scheduler-scheduler" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173072 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" containerName="nova-scheduler-scheduler" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173084 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerName="setup-container" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173092 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerName="setup-container" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173101 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173108 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173118 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173126 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173135 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="openstack-network-exporter" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173142 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="openstack-network-exporter" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173154 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173162 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173173 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-expirer" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173180 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-expirer" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173188 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173193 4856 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173201 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173207 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173215 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="proxy-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173221 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="proxy-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173230 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173236 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173244 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerName="neutron-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173249 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerName="neutron-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173256 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-updater" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173262 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-updater" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173270 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="ovn-northd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173279 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="ovn-northd" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173292 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a66fa8fc-f908-43e7-a169-6156fc2092f8" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173300 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a66fa8fc-f908-43e7-a169-6156fc2092f8" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173311 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="galera" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173316 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="galera" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173324 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eaa66e0-ee9b-4115-b385-222e8ac0c21c" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173330 4856 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9eaa66e0-ee9b-4115-b385-222e8ac0c21c" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173339 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173345 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-log" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173356 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="mysql-bootstrap" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173362 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="mysql-bootstrap" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173369 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerName="galera" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173376 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerName="galera" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173386 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerName="rabbitmq" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173391 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerName="rabbitmq" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173400 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6976ffd-7286-4347-b8af-607803a96768" containerName="keystone-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173407 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6976ffd-7286-4347-b8af-607803a96768" containerName="keystone-api" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173420 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173426 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-api" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173436 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="519a764c-9ac2-4f94-84c6-7c284ab676cd" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173442 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="519a764c-9ac2-4f94-84c6-7c284ab676cd" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173450 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-reaper" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173456 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-reaper" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173466 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173473 4856 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173479 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df337886-1469-499f-bbb4-564f479cafa7" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173486 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="df337886-1469-499f-bbb4-564f479cafa7" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173494 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerName="setup-container" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173500 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerName="setup-container" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173532 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173539 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-log" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173550 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-metadata" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173556 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-metadata" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173565 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173570 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-server" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173578 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-notification-agent" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173585 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-notification-agent" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173597 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173603 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker-log" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173612 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173620 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173631 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-log" Nov 22 07:37:10 crc 
kubenswrapper[4856]: I1122 07:37:10.173638 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-log" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173646 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173652 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173661 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173667 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173675 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-central-agent" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173681 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-central-agent" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173690 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-updater" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173695 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-updater" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173707 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173713 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener-log" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173720 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="swift-recon-cron" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173726 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="swift-recon-cron" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173736 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a38fdd7-2dc0-4ebc-91c7-359d0e437900" containerName="kube-state-metrics" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173742 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a38fdd7-2dc0-4ebc-91c7-359d0e437900" containerName="kube-state-metrics" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173751 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173757 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-server" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173765 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="rsync" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173771 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="rsync" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173779 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b88f55c-12d5-4cba-a155-aa00c19c94f4" containerName="nova-cell1-conductor-conductor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173785 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b88f55c-12d5-4cba-a155-aa00c19c94f4" containerName="nova-cell1-conductor-conductor" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173794 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173800 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173809 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="sg-core" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173815 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="sg-core" Nov 22 07:37:10 crc kubenswrapper[4856]: E1122 07:37:10.173823 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerName="mysql-bootstrap" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173829 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerName="mysql-bootstrap" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173967 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="rsync" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173974 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173983 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3a5d31-7183-4298-87ea-4aa84aa395b4" containerName="galera" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.173994 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="665dbe7c-5370-4a97-8502-e9b25c8acd3a" containerName="barbican-keystone-listener-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174000 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="519a764c-9ac2-4f94-84c6-7c284ab676cd" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174009 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174016 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174024 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerName="neutron-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 
07:37:10.174042 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-metadata" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174057 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e9edb8-ed05-4d0f-aff1-d59b369cd76d" containerName="memcached" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174067 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174074 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174081 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3a26a3d-4d1f-4d48-bf93-ce78fbb8dd92" containerName="nova-metadata-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174092 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfc2e8cc-04c1-4481-bf7d-d7e99972200f" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174101 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovsdb-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174108 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174121 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174134 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-expirer" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174144 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" containerName="nova-api-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174160 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-notification-agent" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174182 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac8c44e-0667-43f7-aebd-a7b4c5bcb429" containerName="rabbitmq" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174192 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="openstack-network-exporter" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174203 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="07329cf7-c3ff-410a-8ab7-8f19ae9d3974" containerName="nova-scheduler-scheduler" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174217 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eaa66e0-ee9b-4115-b385-222e8ac0c21c" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174231 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5250b2-5ff2-4da6-a2b9-038ff0e0d30f" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174247 4856 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fddc3a2d-fda9-4a61-a79c-3a66a7b6d3b8" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174262 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174277 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a38fdd7-2dc0-4ebc-91c7-359d0e437900" containerName="kube-state-metrics" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174287 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="sg-core" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174295 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174307 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0c7ec5-7da6-492d-bdbf-ff2cb8d15313" containerName="neutron-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174320 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfd5417e-43d6-4fe2-807c-8c203cb74c0a" containerName="barbican-worker" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.174494 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176369 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="ceilometer-central-agent" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176391 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3319e5-bf4f-4c00-9cd0-2b13d77aaa89" containerName="rabbitmq" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176400 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa24715-1df9-4a47-9817-4a1b68679d08" containerName="ovn-northd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176408 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6976ffd-7286-4347-b8af-607803a96768" containerName="keystone-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176422 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="swift-recon-cron" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176432 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-replicator" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176445 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f7a457-9a5c-48b5-86c0-24d274596c8a" containerName="barbican-api" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176456 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="285d77d1-e278-4664-97f0-7562e2740a0b" containerName="ovs-vswitchd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176468 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b88f55c-12d5-4cba-a155-aa00c19c94f4" containerName="nova-cell1-conductor-conductor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176479 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ccf431-f692-459f-b249-66bd9747d09c" 
containerName="nova-api-log" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176492 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27ecbc9-0058-49d3-8715-826a4a1bb544" containerName="galera" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176501 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c75cebe3-86db-4be1-9755-4bd8a83c9796" containerName="glance-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176536 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-server" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176551 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="account-reaper" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176559 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-auditor" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176570 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="object-updater" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176580 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b649794-30ba-493c-9285-05a58981ed36" containerName="container-updater" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176591 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0f8403e-a06a-4804-b60a-98974506f547" containerName="proxy-httpd" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176599 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="df337886-1469-499f-bbb4-564f479cafa7" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.176609 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a66fa8fc-f908-43e7-a169-6156fc2092f8" containerName="mariadb-account-delete" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.178034 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.190430 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzzwb"] Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.376407 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qj58\" (UniqueName: \"kubernetes.io/projected/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-kube-api-access-9qj58\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.376546 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-utilities\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.376980 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-catalog-content\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.478606 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-utilities\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.478705 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-catalog-content\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.478767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qj58\" (UniqueName: \"kubernetes.io/projected/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-kube-api-access-9qj58\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.479431 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-utilities\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.479535 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-catalog-content\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.503812 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9qj58\" (UniqueName: \"kubernetes.io/projected/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-kube-api-access-9qj58\") pod \"redhat-operators-bzzwb\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.513222 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:10 crc kubenswrapper[4856]: I1122 07:37:10.995463 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bzzwb"] Nov 22 07:37:11 crc kubenswrapper[4856]: I1122 07:37:11.945791 4856 generic.go:334] "Generic (PLEG): container finished" podID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerID="012574d092912c923f91d557526230151f8d41e9c5bcde26d225aab14ba86089" exitCode=0 Nov 22 07:37:11 crc kubenswrapper[4856]: I1122 07:37:11.946112 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzzwb" event={"ID":"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15","Type":"ContainerDied","Data":"012574d092912c923f91d557526230151f8d41e9c5bcde26d225aab14ba86089"} Nov 22 07:37:11 crc kubenswrapper[4856]: I1122 07:37:11.946146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzzwb" event={"ID":"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15","Type":"ContainerStarted","Data":"24fc62821276aa31261dbff70315ec52e5c614b54b2898fa45dac7d773e23ad2"} Nov 22 07:37:11 crc kubenswrapper[4856]: I1122 07:37:11.950777 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:37:13 crc kubenswrapper[4856]: I1122 07:37:13.967867 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzzwb" event={"ID":"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15","Type":"ContainerStarted","Data":"c34dddd7fd959da8b4ea3d7a16e4f1d0ef1269bfd94ebdb3a1466838eac66912"} Nov 22 07:37:14 crc kubenswrapper[4856]: I1122 07:37:14.987103 4856 generic.go:334] "Generic (PLEG): container finished" podID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerID="c34dddd7fd959da8b4ea3d7a16e4f1d0ef1269bfd94ebdb3a1466838eac66912" exitCode=0 Nov 22 07:37:14 crc kubenswrapper[4856]: I1122 07:37:14.987165 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzzwb" event={"ID":"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15","Type":"ContainerDied","Data":"c34dddd7fd959da8b4ea3d7a16e4f1d0ef1269bfd94ebdb3a1466838eac66912"} Nov 22 07:37:17 crc kubenswrapper[4856]: I1122 07:37:17.005612 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzzwb" event={"ID":"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15","Type":"ContainerStarted","Data":"a6f172e66adae7d8994272fc31c2796929876db4f56f8ae303c781d8984e4c78"} Nov 22 07:37:17 crc kubenswrapper[4856]: I1122 07:37:17.026289 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bzzwb" podStartSLOduration=2.6989316089999997 podStartE2EDuration="7.026264121s" podCreationTimestamp="2025-11-22 07:37:10 +0000 UTC" firstStartedPulling="2025-11-22 07:37:11.950431572 +0000 UTC m=+2074.363824840" lastFinishedPulling="2025-11-22 07:37:16.277764094 +0000 UTC m=+2078.691157352" observedRunningTime="2025-11-22 07:37:17.022110398 +0000 UTC m=+2079.435503716" watchObservedRunningTime="2025-11-22 07:37:17.026264121 +0000 UTC m=+2079.439657379" Nov 22 07:37:19 crc 
kubenswrapper[4856]: I1122 07:37:19.067081 4856 scope.go:117] "RemoveContainer" containerID="dfb96d957f6cb86c56972d43dc87c8482e105284bd355469527ebc982327a614" Nov 22 07:37:19 crc kubenswrapper[4856]: I1122 07:37:19.107545 4856 scope.go:117] "RemoveContainer" containerID="3b896043efdc8c004a38ab6b7b5f5f48c16d9c632c413f3028ca037ccf425d7c" Nov 22 07:37:19 crc kubenswrapper[4856]: I1122 07:37:19.129939 4856 scope.go:117] "RemoveContainer" containerID="e3477f3418da229a71284b8471efbfa54d35d6398ff6c275fa37d3833c1d430c" Nov 22 07:37:19 crc kubenswrapper[4856]: I1122 07:37:19.180417 4856 scope.go:117] "RemoveContainer" containerID="4e140130f70e83959c9825437a4356abd59518f1bc7f588c31811c6ba07d3a8c" Nov 22 07:37:19 crc kubenswrapper[4856]: I1122 07:37:19.207009 4856 scope.go:117] "RemoveContainer" containerID="89cdd001df7801b445de99ac1cd0d1ad9f94f9868fc26d5e211218c30596f805" Nov 22 07:37:19 crc kubenswrapper[4856]: I1122 07:37:19.233063 4856 scope.go:117] "RemoveContainer" containerID="ffdd3ca38408b987d9d0ea61512a955ab061b2e9e99a4ac866fe731cbc23b7ff" Nov 22 07:37:20 crc kubenswrapper[4856]: I1122 07:37:20.514216 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:20 crc kubenswrapper[4856]: I1122 07:37:20.514336 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:21 crc kubenswrapper[4856]: I1122 07:37:21.555289 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bzzwb" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="registry-server" probeResult="failure" output=< Nov 22 07:37:21 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:37:21 crc kubenswrapper[4856]: > Nov 22 07:37:30 crc kubenswrapper[4856]: I1122 07:37:30.555758 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:30 crc kubenswrapper[4856]: I1122 07:37:30.602859 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:30 crc kubenswrapper[4856]: I1122 07:37:30.785364 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzzwb"] Nov 22 07:37:32 crc kubenswrapper[4856]: I1122 07:37:32.112761 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bzzwb" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="registry-server" containerID="cri-o://a6f172e66adae7d8994272fc31c2796929876db4f56f8ae303c781d8984e4c78" gracePeriod=2 Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.147946 4856 generic.go:334] "Generic (PLEG): container finished" podID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerID="a6f172e66adae7d8994272fc31c2796929876db4f56f8ae303c781d8984e4c78" exitCode=0 Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.148034 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzzwb" event={"ID":"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15","Type":"ContainerDied","Data":"a6f172e66adae7d8994272fc31c2796929876db4f56f8ae303c781d8984e4c78"} Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.403532 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.557938 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-utilities\") pod \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.558064 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qj58\" (UniqueName: \"kubernetes.io/projected/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-kube-api-access-9qj58\") pod \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.558107 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-catalog-content\") pod \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\" (UID: \"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15\") " Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.558858 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-utilities" (OuterVolumeSpecName: "utilities") pod "1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" (UID: "1a828d7e-5947-4a1a-ab1e-07a7a16b4d15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.567068 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-kube-api-access-9qj58" (OuterVolumeSpecName: "kube-api-access-9qj58") pod "1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" (UID: "1a828d7e-5947-4a1a-ab1e-07a7a16b4d15"). InnerVolumeSpecName "kube-api-access-9qj58". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.659673 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qj58\" (UniqueName: \"kubernetes.io/projected/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-kube-api-access-9qj58\") on node \"crc\" DevicePath \"\"" Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.659705 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.661849 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" (UID: "1a828d7e-5947-4a1a-ab1e-07a7a16b4d15"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:37:36 crc kubenswrapper[4856]: I1122 07:37:36.761450 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:37:37 crc kubenswrapper[4856]: I1122 07:37:37.158567 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bzzwb" event={"ID":"1a828d7e-5947-4a1a-ab1e-07a7a16b4d15","Type":"ContainerDied","Data":"24fc62821276aa31261dbff70315ec52e5c614b54b2898fa45dac7d773e23ad2"} Nov 22 07:37:37 crc kubenswrapper[4856]: I1122 07:37:37.158619 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bzzwb" Nov 22 07:37:37 crc kubenswrapper[4856]: I1122 07:37:37.158635 4856 scope.go:117] "RemoveContainer" containerID="a6f172e66adae7d8994272fc31c2796929876db4f56f8ae303c781d8984e4c78" Nov 22 07:37:37 crc kubenswrapper[4856]: I1122 07:37:37.179046 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bzzwb"] Nov 22 07:37:37 crc kubenswrapper[4856]: I1122 07:37:37.181907 4856 scope.go:117] "RemoveContainer" containerID="c34dddd7fd959da8b4ea3d7a16e4f1d0ef1269bfd94ebdb3a1466838eac66912" Nov 22 07:37:37 crc kubenswrapper[4856]: I1122 07:37:37.186348 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bzzwb"] Nov 22 07:37:37 crc kubenswrapper[4856]: I1122 07:37:37.201617 4856 scope.go:117] "RemoveContainer" containerID="012574d092912c923f91d557526230151f8d41e9c5bcde26d225aab14ba86089" Nov 22 07:37:38 crc kubenswrapper[4856]: I1122 07:37:38.719090 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" path="/var/lib/kubelet/pods/1a828d7e-5947-4a1a-ab1e-07a7a16b4d15/volumes" Nov 22 07:37:59 crc kubenswrapper[4856]: I1122 07:37:59.754367 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:37:59 crc kubenswrapper[4856]: I1122 07:37:59.754901 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.712895 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tf2dt"] Nov 22 07:38:07 crc kubenswrapper[4856]: E1122 07:38:07.713813 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="extract-utilities" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.713831 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="extract-utilities" Nov 22 07:38:07 crc kubenswrapper[4856]: E1122 07:38:07.713948 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="registry-server" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.713961 
4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="registry-server" Nov 22 07:38:07 crc kubenswrapper[4856]: E1122 07:38:07.713975 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="extract-content" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.713984 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="extract-content" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.714184 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a828d7e-5947-4a1a-ab1e-07a7a16b4d15" containerName="registry-server" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.716954 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.722053 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tf2dt"] Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.817392 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xprfp\" (UniqueName: \"kubernetes.io/projected/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-kube-api-access-xprfp\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.817454 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-utilities\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.817654 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-catalog-content\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.919327 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xprfp\" (UniqueName: \"kubernetes.io/projected/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-kube-api-access-xprfp\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.919371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-utilities\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.919427 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-catalog-content\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc 
kubenswrapper[4856]: I1122 07:38:07.919892 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-catalog-content\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.920464 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-utilities\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:07 crc kubenswrapper[4856]: I1122 07:38:07.938438 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xprfp\" (UniqueName: \"kubernetes.io/projected/ee03d935-6d6f-4d2d-ab4e-bc9e85256487-kube-api-access-xprfp\") pod \"community-operators-tf2dt\" (UID: \"ee03d935-6d6f-4d2d-ab4e-bc9e85256487\") " pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:08 crc kubenswrapper[4856]: I1122 07:38:08.042733 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:08 crc kubenswrapper[4856]: I1122 07:38:08.515174 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tf2dt"] Nov 22 07:38:09 crc kubenswrapper[4856]: I1122 07:38:09.379238 4856 generic.go:334] "Generic (PLEG): container finished" podID="ee03d935-6d6f-4d2d-ab4e-bc9e85256487" containerID="4e16d8cc17e31fb6e2d4aa0670b7e1077ac6ebffe30d81c021251c4d2f37a19c" exitCode=0 Nov 22 07:38:09 crc kubenswrapper[4856]: I1122 07:38:09.379333 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf2dt" event={"ID":"ee03d935-6d6f-4d2d-ab4e-bc9e85256487","Type":"ContainerDied","Data":"4e16d8cc17e31fb6e2d4aa0670b7e1077ac6ebffe30d81c021251c4d2f37a19c"} Nov 22 07:38:09 crc kubenswrapper[4856]: I1122 07:38:09.379601 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf2dt" event={"ID":"ee03d935-6d6f-4d2d-ab4e-bc9e85256487","Type":"ContainerStarted","Data":"610805835c0a84568d0db6761fc0bc08b0303869d69295f2c5ccc7d5640e7696"} Nov 22 07:38:18 crc kubenswrapper[4856]: I1122 07:38:18.454890 4856 generic.go:334] "Generic (PLEG): container finished" podID="ee03d935-6d6f-4d2d-ab4e-bc9e85256487" containerID="ee0ddba07078500b3728a24803232a1356ea9e4a6e68900a5b2fa0164bae0355" exitCode=0 Nov 22 07:38:18 crc kubenswrapper[4856]: I1122 07:38:18.454945 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf2dt" event={"ID":"ee03d935-6d6f-4d2d-ab4e-bc9e85256487","Type":"ContainerDied","Data":"ee0ddba07078500b3728a24803232a1356ea9e4a6e68900a5b2fa0164bae0355"} Nov 22 07:38:19 crc kubenswrapper[4856]: I1122 07:38:19.329896 4856 scope.go:117] "RemoveContainer" containerID="02a270d659156bdef916a33cbab50d2c8c0cc0527187e2d9fcd2dc12495e6671" Nov 22 07:38:20 crc kubenswrapper[4856]: I1122 07:38:20.139618 4856 scope.go:117] "RemoveContainer" containerID="b68c3e9d5fec381205cff7840dff84ed802d1d3dd4294ad59eed929c11d88ac0" Nov 22 07:38:20 crc kubenswrapper[4856]: I1122 07:38:20.160571 4856 scope.go:117] "RemoveContainer" containerID="ee12aaf0afa1e8898092ae79bf6a5ca333cd078f19c65b37949306518a4fa5b2" Nov 22 07:38:20 crc 
kubenswrapper[4856]: I1122 07:38:20.194827 4856 scope.go:117] "RemoveContainer" containerID="b22af23b8eca911c39bf860e938113315fcb9f3dd60e8b97761359b25855b4a1" Nov 22 07:38:20 crc kubenswrapper[4856]: I1122 07:38:20.232206 4856 scope.go:117] "RemoveContainer" containerID="24259cf1c1f38f1bc7f64997b64b9ed69fb4bf62d123b79b4fadefd0f143056d" Nov 22 07:38:20 crc kubenswrapper[4856]: I1122 07:38:20.253757 4856 scope.go:117] "RemoveContainer" containerID="9c779f0b72d4f2c2ada9fd6dc8dc03ef2aab3227a1892368025f13f9dd006d57" Nov 22 07:38:20 crc kubenswrapper[4856]: I1122 07:38:20.272342 4856 scope.go:117] "RemoveContainer" containerID="889ab0aa1988eeb2448a9ab0bc42e314c5c9c7e3df09896245e4cd6f9448c8fb" Nov 22 07:38:21 crc kubenswrapper[4856]: I1122 07:38:21.483004 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tf2dt" event={"ID":"ee03d935-6d6f-4d2d-ab4e-bc9e85256487","Type":"ContainerStarted","Data":"a5bb2cd55b1ec69614c3fc9633b7194892823209556696d844f20de83cefc47b"} Nov 22 07:38:21 crc kubenswrapper[4856]: I1122 07:38:21.508148 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tf2dt" podStartSLOduration=3.748967405 podStartE2EDuration="14.5081297s" podCreationTimestamp="2025-11-22 07:38:07 +0000 UTC" firstStartedPulling="2025-11-22 07:38:09.380654562 +0000 UTC m=+2131.794047820" lastFinishedPulling="2025-11-22 07:38:20.139816857 +0000 UTC m=+2142.553210115" observedRunningTime="2025-11-22 07:38:21.506678291 +0000 UTC m=+2143.920071549" watchObservedRunningTime="2025-11-22 07:38:21.5081297 +0000 UTC m=+2143.921522958" Nov 22 07:38:28 crc kubenswrapper[4856]: I1122 07:38:28.043145 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:28 crc kubenswrapper[4856]: I1122 07:38:28.043707 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:28 crc kubenswrapper[4856]: I1122 07:38:28.087295 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:28 crc kubenswrapper[4856]: I1122 07:38:28.581406 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tf2dt" Nov 22 07:38:28 crc kubenswrapper[4856]: I1122 07:38:28.652555 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tf2dt"] Nov 22 07:38:28 crc kubenswrapper[4856]: I1122 07:38:28.693335 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65fqc"] Nov 22 07:38:28 crc kubenswrapper[4856]: I1122 07:38:28.693650 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-65fqc" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="registry-server" containerID="cri-o://7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05" gracePeriod=2 Nov 22 07:38:29 crc kubenswrapper[4856]: I1122 07:38:29.754804 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:38:29 crc kubenswrapper[4856]: I1122 07:38:29.754867 4856 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:38:30 crc kubenswrapper[4856]: E1122 07:38:30.474941 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05 is running failed: container process not found" containerID="7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:38:30 crc kubenswrapper[4856]: E1122 07:38:30.475706 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05 is running failed: container process not found" containerID="7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:38:30 crc kubenswrapper[4856]: E1122 07:38:30.476352 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05 is running failed: container process not found" containerID="7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:38:30 crc kubenswrapper[4856]: E1122 07:38:30.476402 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-65fqc" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="registry-server" Nov 22 07:38:30 crc kubenswrapper[4856]: I1122 07:38:30.547260 4856 generic.go:334] "Generic (PLEG): container finished" podID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerID="7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05" exitCode=0 Nov 22 07:38:30 crc kubenswrapper[4856]: I1122 07:38:30.547369 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fqc" event={"ID":"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc","Type":"ContainerDied","Data":"7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05"} Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.089678 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.185129 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pmcb\" (UniqueName: \"kubernetes.io/projected/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-kube-api-access-7pmcb\") pod \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.185282 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-utilities\") pod \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.185443 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-catalog-content\") pod \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\" (UID: \"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc\") " Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.186004 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-utilities" (OuterVolumeSpecName: "utilities") pod "15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" (UID: "15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.191764 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-kube-api-access-7pmcb" (OuterVolumeSpecName: "kube-api-access-7pmcb") pod "15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" (UID: "15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc"). InnerVolumeSpecName "kube-api-access-7pmcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.287641 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.287678 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pmcb\" (UniqueName: \"kubernetes.io/projected/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-kube-api-access-7pmcb\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.293801 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" (UID: "15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.389414 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.573627 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-65fqc" event={"ID":"15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc","Type":"ContainerDied","Data":"a8cd6d8ed44b7b391a51485bca606151c535d3a1496aa6ad63439acc8e9d8326"} Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.573668 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-65fqc" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.573697 4856 scope.go:117] "RemoveContainer" containerID="7deb5ef4278accb7655743c98be0550a545d964e873af57bfc07fa0cee3e5d05" Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.603270 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-65fqc"] Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.608498 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-65fqc"] Nov 22 07:38:33 crc kubenswrapper[4856]: I1122 07:38:33.627392 4856 scope.go:117] "RemoveContainer" containerID="c4b81cc20c736dfaa8755bdd0d37409fb5de3c16a16f5d73269c379208a166d2" Nov 22 07:38:34 crc kubenswrapper[4856]: I1122 07:38:34.021971 4856 scope.go:117] "RemoveContainer" containerID="bb21a190082d552f6fce36d6bc15c016cd0e681baf92e80b7487bf04d456b816" Nov 22 07:38:34 crc kubenswrapper[4856]: I1122 07:38:34.728779 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" path="/var/lib/kubelet/pods/15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc/volumes" Nov 22 07:38:59 crc kubenswrapper[4856]: I1122 07:38:59.754980 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:38:59 crc kubenswrapper[4856]: I1122 07:38:59.755639 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:38:59 crc kubenswrapper[4856]: I1122 07:38:59.755690 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:38:59 crc kubenswrapper[4856]: I1122 07:38:59.756322 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"390190f2e77b02ceb8fd2ed59e451cf120a15ca7d5e154142042d4828039a7b8"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:38:59 crc kubenswrapper[4856]: I1122 07:38:59.756373 4856 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://390190f2e77b02ceb8fd2ed59e451cf120a15ca7d5e154142042d4828039a7b8" gracePeriod=600 Nov 22 07:39:00 crc kubenswrapper[4856]: I1122 07:39:00.797175 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="390190f2e77b02ceb8fd2ed59e451cf120a15ca7d5e154142042d4828039a7b8" exitCode=0 Nov 22 07:39:00 crc kubenswrapper[4856]: I1122 07:39:00.797232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"390190f2e77b02ceb8fd2ed59e451cf120a15ca7d5e154142042d4828039a7b8"} Nov 22 07:39:00 crc kubenswrapper[4856]: I1122 07:39:00.797688 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d"} Nov 22 07:39:00 crc kubenswrapper[4856]: I1122 07:39:00.797726 4856 scope.go:117] "RemoveContainer" containerID="458523307c93e5fa2025ca0ab45e4a453b5c6607f1b564b519ffc9527b905167" Nov 22 07:39:20 crc kubenswrapper[4856]: I1122 07:39:20.362418 4856 scope.go:117] "RemoveContainer" containerID="d07bafd279ede63c481207e2d867b890ff02bb3ed145878df74e9c9bf2234f52" Nov 22 07:39:20 crc kubenswrapper[4856]: I1122 07:39:20.391105 4856 scope.go:117] "RemoveContainer" containerID="2be53b136ee6ae58f4111e796dc3dacfcd801bacdf0e16aa09eabf48d1ca897c" Nov 22 07:39:20 crc kubenswrapper[4856]: I1122 07:39:20.428403 4856 scope.go:117] "RemoveContainer" containerID="9741711f509fcaac60e11d8c80612fcee97889c09aee1bfdf5b301c894e0da33" Nov 22 07:39:20 crc kubenswrapper[4856]: I1122 07:39:20.464392 4856 scope.go:117] "RemoveContainer" containerID="7c3193dd655842750b4950a2f0999bd2a78e535166e5cf2551e4cdaf1b19f49e" Nov 22 07:39:20 crc kubenswrapper[4856]: I1122 07:39:20.486957 4856 scope.go:117] "RemoveContainer" containerID="512fcc1df0f14272bdaa8bdcc74bd190f573df805e1cf118ca7c673232d677b1" Nov 22 07:39:20 crc kubenswrapper[4856]: I1122 07:39:20.511408 4856 scope.go:117] "RemoveContainer" containerID="2dec82303da62aba427908f57f4ad7d03b32f004aa85bacfde2cdfa11f792f02" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.361937 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mr9zv"] Nov 22 07:39:21 crc kubenswrapper[4856]: E1122 07:39:21.362226 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="extract-utilities" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.362238 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="extract-utilities" Nov 22 07:39:21 crc kubenswrapper[4856]: E1122 07:39:21.362263 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="extract-content" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.362270 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="extract-content" Nov 22 07:39:21 crc kubenswrapper[4856]: E1122 07:39:21.362280 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="registry-server" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.362285 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="registry-server" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.362427 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cc2e9b-3d4b-4ea1-9067-6ee261fc0ebc" containerName="registry-server" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.363581 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.391226 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr9zv"] Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.472325 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-utilities\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.472452 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5284\" (UniqueName: \"kubernetes.io/projected/62471c35-ef91-4891-b6cf-25362282d812-kube-api-access-h5284\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.472652 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-catalog-content\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.574334 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-utilities\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.574893 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-utilities\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.575970 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5284\" (UniqueName: \"kubernetes.io/projected/62471c35-ef91-4891-b6cf-25362282d812-kube-api-access-h5284\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.576148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-catalog-content\") pod \"redhat-marketplace-mr9zv\" (UID: 
\"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.576488 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-catalog-content\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.596670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5284\" (UniqueName: \"kubernetes.io/projected/62471c35-ef91-4891-b6cf-25362282d812-kube-api-access-h5284\") pod \"redhat-marketplace-mr9zv\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:21 crc kubenswrapper[4856]: I1122 07:39:21.726639 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:22 crc kubenswrapper[4856]: I1122 07:39:22.009626 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr9zv"] Nov 22 07:39:22 crc kubenswrapper[4856]: I1122 07:39:22.154839 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr9zv" event={"ID":"62471c35-ef91-4891-b6cf-25362282d812","Type":"ContainerStarted","Data":"3a38774d15af5bd7ad5c5320a3982db6a566c272528cb9176cb7c84e5f9bb324"} Nov 22 07:39:23 crc kubenswrapper[4856]: I1122 07:39:23.163917 4856 generic.go:334] "Generic (PLEG): container finished" podID="62471c35-ef91-4891-b6cf-25362282d812" containerID="65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9" exitCode=0 Nov 22 07:39:23 crc kubenswrapper[4856]: I1122 07:39:23.163962 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr9zv" event={"ID":"62471c35-ef91-4891-b6cf-25362282d812","Type":"ContainerDied","Data":"65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9"} Nov 22 07:39:27 crc kubenswrapper[4856]: I1122 07:39:27.192617 4856 generic.go:334] "Generic (PLEG): container finished" podID="62471c35-ef91-4891-b6cf-25362282d812" containerID="4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10" exitCode=0 Nov 22 07:39:27 crc kubenswrapper[4856]: I1122 07:39:27.192740 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr9zv" event={"ID":"62471c35-ef91-4891-b6cf-25362282d812","Type":"ContainerDied","Data":"4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10"} Nov 22 07:39:28 crc kubenswrapper[4856]: I1122 07:39:28.203323 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr9zv" event={"ID":"62471c35-ef91-4891-b6cf-25362282d812","Type":"ContainerStarted","Data":"57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2"} Nov 22 07:39:28 crc kubenswrapper[4856]: I1122 07:39:28.219996 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mr9zv" podStartSLOduration=2.421815452 podStartE2EDuration="7.219979356s" podCreationTimestamp="2025-11-22 07:39:21 +0000 UTC" firstStartedPulling="2025-11-22 07:39:23.165204726 +0000 UTC m=+2205.578597984" lastFinishedPulling="2025-11-22 07:39:27.96336863 +0000 UTC m=+2210.376761888" observedRunningTime="2025-11-22 
07:39:28.219559324 +0000 UTC m=+2210.632952582" watchObservedRunningTime="2025-11-22 07:39:28.219979356 +0000 UTC m=+2210.633372614" Nov 22 07:39:31 crc kubenswrapper[4856]: I1122 07:39:31.727801 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:31 crc kubenswrapper[4856]: I1122 07:39:31.728201 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:31 crc kubenswrapper[4856]: I1122 07:39:31.768478 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.576444 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sgvnd"] Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.578816 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.590669 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgvnd"] Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.741888 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-227x6\" (UniqueName: \"kubernetes.io/projected/c486ae91-b727-4072-b03c-cbf476ffc97f-kube-api-access-227x6\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.741946 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-catalog-content\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.742027 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-utilities\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.842826 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-227x6\" (UniqueName: \"kubernetes.io/projected/c486ae91-b727-4072-b03c-cbf476ffc97f-kube-api-access-227x6\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.842889 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-catalog-content\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.842950 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-utilities\") pod 
\"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.843453 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-utilities\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.843602 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-catalog-content\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.872633 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-227x6\" (UniqueName: \"kubernetes.io/projected/c486ae91-b727-4072-b03c-cbf476ffc97f-kube-api-access-227x6\") pod \"certified-operators-sgvnd\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:40 crc kubenswrapper[4856]: I1122 07:39:40.907373 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:41 crc kubenswrapper[4856]: I1122 07:39:41.396443 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgvnd"] Nov 22 07:39:41 crc kubenswrapper[4856]: W1122 07:39:41.400372 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc486ae91_b727_4072_b03c_cbf476ffc97f.slice/crio-2c0aab8c2a9991d01ae5b814f489c069dc0ec9cf0427ff9b7a9b22bdc3104a52 WatchSource:0}: Error finding container 2c0aab8c2a9991d01ae5b814f489c069dc0ec9cf0427ff9b7a9b22bdc3104a52: Status 404 returned error can't find the container with id 2c0aab8c2a9991d01ae5b814f489c069dc0ec9cf0427ff9b7a9b22bdc3104a52 Nov 22 07:39:41 crc kubenswrapper[4856]: I1122 07:39:41.799410 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:42 crc kubenswrapper[4856]: I1122 07:39:42.297937 4856 generic.go:334] "Generic (PLEG): container finished" podID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerID="3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654" exitCode=0 Nov 22 07:39:42 crc kubenswrapper[4856]: I1122 07:39:42.297983 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgvnd" event={"ID":"c486ae91-b727-4072-b03c-cbf476ffc97f","Type":"ContainerDied","Data":"3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654"} Nov 22 07:39:42 crc kubenswrapper[4856]: I1122 07:39:42.298036 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgvnd" event={"ID":"c486ae91-b727-4072-b03c-cbf476ffc97f","Type":"ContainerStarted","Data":"2c0aab8c2a9991d01ae5b814f489c069dc0ec9cf0427ff9b7a9b22bdc3104a52"} Nov 22 07:39:43 crc kubenswrapper[4856]: I1122 07:39:43.755084 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr9zv"] Nov 22 07:39:43 crc kubenswrapper[4856]: I1122 07:39:43.756719 4856 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mr9zv" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="registry-server" containerID="cri-o://57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2" gracePeriod=2 Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.135161 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.291140 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-catalog-content\") pod \"62471c35-ef91-4891-b6cf-25362282d812\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.291233 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-utilities\") pod \"62471c35-ef91-4891-b6cf-25362282d812\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.291307 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5284\" (UniqueName: \"kubernetes.io/projected/62471c35-ef91-4891-b6cf-25362282d812-kube-api-access-h5284\") pod \"62471c35-ef91-4891-b6cf-25362282d812\" (UID: \"62471c35-ef91-4891-b6cf-25362282d812\") " Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.292295 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-utilities" (OuterVolumeSpecName: "utilities") pod "62471c35-ef91-4891-b6cf-25362282d812" (UID: "62471c35-ef91-4891-b6cf-25362282d812"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.298728 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62471c35-ef91-4891-b6cf-25362282d812-kube-api-access-h5284" (OuterVolumeSpecName: "kube-api-access-h5284") pod "62471c35-ef91-4891-b6cf-25362282d812" (UID: "62471c35-ef91-4891-b6cf-25362282d812"). InnerVolumeSpecName "kube-api-access-h5284". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.313115 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62471c35-ef91-4891-b6cf-25362282d812" (UID: "62471c35-ef91-4891-b6cf-25362282d812"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.316546 4856 generic.go:334] "Generic (PLEG): container finished" podID="62471c35-ef91-4891-b6cf-25362282d812" containerID="57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2" exitCode=0 Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.316623 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr9zv" event={"ID":"62471c35-ef91-4891-b6cf-25362282d812","Type":"ContainerDied","Data":"57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2"} Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.316655 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mr9zv" event={"ID":"62471c35-ef91-4891-b6cf-25362282d812","Type":"ContainerDied","Data":"3a38774d15af5bd7ad5c5320a3982db6a566c272528cb9176cb7c84e5f9bb324"} Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.316673 4856 scope.go:117] "RemoveContainer" containerID="57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.316802 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mr9zv" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.320942 4856 generic.go:334] "Generic (PLEG): container finished" podID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerID="d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547" exitCode=0 Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.320987 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgvnd" event={"ID":"c486ae91-b727-4072-b03c-cbf476ffc97f","Type":"ContainerDied","Data":"d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547"} Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.337108 4856 scope.go:117] "RemoveContainer" containerID="4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.360922 4856 scope.go:117] "RemoveContainer" containerID="65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.363781 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr9zv"] Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.369620 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mr9zv"] Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.397812 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.397876 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62471c35-ef91-4891-b6cf-25362282d812-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.397899 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5284\" (UniqueName: \"kubernetes.io/projected/62471c35-ef91-4891-b6cf-25362282d812-kube-api-access-h5284\") on node \"crc\" DevicePath \"\"" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.400429 4856 scope.go:117] "RemoveContainer" 
containerID="57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2" Nov 22 07:39:44 crc kubenswrapper[4856]: E1122 07:39:44.401088 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2\": container with ID starting with 57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2 not found: ID does not exist" containerID="57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.401173 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2"} err="failed to get container status \"57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2\": rpc error: code = NotFound desc = could not find container \"57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2\": container with ID starting with 57056d5c9c32724ad49e0dd691cb3965aeb61064ce2366da0aefa72db81efaf2 not found: ID does not exist" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.401211 4856 scope.go:117] "RemoveContainer" containerID="4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10" Nov 22 07:39:44 crc kubenswrapper[4856]: E1122 07:39:44.402398 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10\": container with ID starting with 4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10 not found: ID does not exist" containerID="4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.402683 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10"} err="failed to get container status \"4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10\": rpc error: code = NotFound desc = could not find container \"4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10\": container with ID starting with 4ea2e47d10d336c74c2e37763f362bf6370fe758acce036846bebf50d0c6bc10 not found: ID does not exist" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.402720 4856 scope.go:117] "RemoveContainer" containerID="65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9" Nov 22 07:39:44 crc kubenswrapper[4856]: E1122 07:39:44.403313 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9\": container with ID starting with 65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9 not found: ID does not exist" containerID="65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.403344 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9"} err="failed to get container status \"65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9\": rpc error: code = NotFound desc = could not find container \"65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9\": container with ID starting with 
65d7b81e5b58df6cb8d488a1bb53645014f569c0a88228d705ddceed3cf5f5c9 not found: ID does not exist" Nov 22 07:39:44 crc kubenswrapper[4856]: I1122 07:39:44.720478 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62471c35-ef91-4891-b6cf-25362282d812" path="/var/lib/kubelet/pods/62471c35-ef91-4891-b6cf-25362282d812/volumes" Nov 22 07:39:45 crc kubenswrapper[4856]: I1122 07:39:45.331387 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgvnd" event={"ID":"c486ae91-b727-4072-b03c-cbf476ffc97f","Type":"ContainerStarted","Data":"6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3"} Nov 22 07:39:50 crc kubenswrapper[4856]: I1122 07:39:50.908149 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:50 crc kubenswrapper[4856]: I1122 07:39:50.908868 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:50 crc kubenswrapper[4856]: I1122 07:39:50.957897 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:50 crc kubenswrapper[4856]: I1122 07:39:50.986270 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sgvnd" podStartSLOduration=8.369020097 podStartE2EDuration="10.986243239s" podCreationTimestamp="2025-11-22 07:39:40 +0000 UTC" firstStartedPulling="2025-11-22 07:39:42.299342021 +0000 UTC m=+2224.712735279" lastFinishedPulling="2025-11-22 07:39:44.916565163 +0000 UTC m=+2227.329958421" observedRunningTime="2025-11-22 07:39:45.354563849 +0000 UTC m=+2227.767957127" watchObservedRunningTime="2025-11-22 07:39:50.986243239 +0000 UTC m=+2233.399636497" Nov 22 07:39:51 crc kubenswrapper[4856]: I1122 07:39:51.417150 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:39:59 crc kubenswrapper[4856]: I1122 07:39:59.755118 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgvnd"] Nov 22 07:39:59 crc kubenswrapper[4856]: I1122 07:39:59.756045 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sgvnd" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="registry-server" containerID="cri-o://6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" gracePeriod=2 Nov 22 07:40:00 crc kubenswrapper[4856]: E1122 07:40:00.908924 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3 is running failed: container process not found" containerID="6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:40:00 crc kubenswrapper[4856]: E1122 07:40:00.909858 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3 is running failed: container process not found" containerID="6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:40:00 crc kubenswrapper[4856]: 
E1122 07:40:00.910799 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3 is running failed: container process not found" containerID="6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" cmd=["grpc_health_probe","-addr=:50051"] Nov 22 07:40:00 crc kubenswrapper[4856]: E1122 07:40:00.910878 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-sgvnd" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="registry-server" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.268571 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.339347 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-utilities\") pod \"c486ae91-b727-4072-b03c-cbf476ffc97f\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.339403 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-catalog-content\") pod \"c486ae91-b727-4072-b03c-cbf476ffc97f\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.340189 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-utilities" (OuterVolumeSpecName: "utilities") pod "c486ae91-b727-4072-b03c-cbf476ffc97f" (UID: "c486ae91-b727-4072-b03c-cbf476ffc97f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.384612 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c486ae91-b727-4072-b03c-cbf476ffc97f" (UID: "c486ae91-b727-4072-b03c-cbf476ffc97f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.440044 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-227x6\" (UniqueName: \"kubernetes.io/projected/c486ae91-b727-4072-b03c-cbf476ffc97f-kube-api-access-227x6\") pod \"c486ae91-b727-4072-b03c-cbf476ffc97f\" (UID: \"c486ae91-b727-4072-b03c-cbf476ffc97f\") " Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.440316 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.440336 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c486ae91-b727-4072-b03c-cbf476ffc97f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.444716 4856 generic.go:334] "Generic (PLEG): container finished" podID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerID="6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" exitCode=0 Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.444768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgvnd" event={"ID":"c486ae91-b727-4072-b03c-cbf476ffc97f","Type":"ContainerDied","Data":"6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3"} Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.444808 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgvnd" event={"ID":"c486ae91-b727-4072-b03c-cbf476ffc97f","Type":"ContainerDied","Data":"2c0aab8c2a9991d01ae5b814f489c069dc0ec9cf0427ff9b7a9b22bdc3104a52"} Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.444819 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgvnd" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.444831 4856 scope.go:117] "RemoveContainer" containerID="6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.446835 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c486ae91-b727-4072-b03c-cbf476ffc97f-kube-api-access-227x6" (OuterVolumeSpecName: "kube-api-access-227x6") pod "c486ae91-b727-4072-b03c-cbf476ffc97f" (UID: "c486ae91-b727-4072-b03c-cbf476ffc97f"). InnerVolumeSpecName "kube-api-access-227x6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.483629 4856 scope.go:117] "RemoveContainer" containerID="d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.503999 4856 scope.go:117] "RemoveContainer" containerID="3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.533159 4856 scope.go:117] "RemoveContainer" containerID="6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" Nov 22 07:40:01 crc kubenswrapper[4856]: E1122 07:40:01.533951 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3\": container with ID starting with 6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3 not found: ID does not exist" containerID="6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.534014 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3"} err="failed to get container status \"6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3\": rpc error: code = NotFound desc = could not find container \"6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3\": container with ID starting with 6876df68ffb6b89d5352d061c8b6698d7c73c9c287d6bd64fc3c46da11a2f1d3 not found: ID does not exist" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.534052 4856 scope.go:117] "RemoveContainer" containerID="d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547" Nov 22 07:40:01 crc kubenswrapper[4856]: E1122 07:40:01.535200 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547\": container with ID starting with d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547 not found: ID does not exist" containerID="d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.535234 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547"} err="failed to get container status \"d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547\": rpc error: code = NotFound desc = could not find container \"d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547\": container with ID starting with d3846612886ae19993b5da7982fa40aa8bbb948944ca45f187c6e0c06e558547 not found: ID does not exist" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.535256 4856 scope.go:117] "RemoveContainer" containerID="3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654" Nov 22 07:40:01 crc kubenswrapper[4856]: E1122 07:40:01.535800 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654\": container with ID starting with 3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654 not found: ID does not exist" containerID="3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654" Nov 22 07:40:01 crc 
kubenswrapper[4856]: I1122 07:40:01.535845 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654"} err="failed to get container status \"3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654\": rpc error: code = NotFound desc = could not find container \"3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654\": container with ID starting with 3d165e4c955c97624f69d03a1d06c054bbe30b65f384ec44b5d60a1a6966d654 not found: ID does not exist" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.541382 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-227x6\" (UniqueName: \"kubernetes.io/projected/c486ae91-b727-4072-b03c-cbf476ffc97f-kube-api-access-227x6\") on node \"crc\" DevicePath \"\"" Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.777427 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgvnd"] Nov 22 07:40:01 crc kubenswrapper[4856]: I1122 07:40:01.782819 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sgvnd"] Nov 22 07:40:02 crc kubenswrapper[4856]: I1122 07:40:02.722051 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" path="/var/lib/kubelet/pods/c486ae91-b727-4072-b03c-cbf476ffc97f/volumes" Nov 22 07:41:29 crc kubenswrapper[4856]: I1122 07:41:29.754417 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:41:29 crc kubenswrapper[4856]: I1122 07:41:29.754989 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:41:59 crc kubenswrapper[4856]: I1122 07:41:59.754486 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:41:59 crc kubenswrapper[4856]: I1122 07:41:59.755022 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:42:29 crc kubenswrapper[4856]: I1122 07:42:29.754282 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:42:29 crc kubenswrapper[4856]: I1122 07:42:29.755805 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:42:29 crc kubenswrapper[4856]: I1122 07:42:29.755872 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:42:29 crc kubenswrapper[4856]: I1122 07:42:29.756373 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:42:29 crc kubenswrapper[4856]: I1122 07:42:29.756432 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" gracePeriod=600 Nov 22 07:42:29 crc kubenswrapper[4856]: E1122 07:42:29.920765 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:42:30 crc kubenswrapper[4856]: I1122 07:42:30.524317 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" exitCode=0 Nov 22 07:42:30 crc kubenswrapper[4856]: I1122 07:42:30.524393 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d"} Nov 22 07:42:30 crc kubenswrapper[4856]: I1122 07:42:30.524742 4856 scope.go:117] "RemoveContainer" containerID="390190f2e77b02ceb8fd2ed59e451cf120a15ca7d5e154142042d4828039a7b8" Nov 22 07:42:30 crc kubenswrapper[4856]: I1122 07:42:30.525395 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:42:30 crc kubenswrapper[4856]: E1122 07:42:30.525650 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:42:42 crc kubenswrapper[4856]: I1122 07:42:42.710910 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:42:42 crc kubenswrapper[4856]: E1122 07:42:42.711670 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:42:56 crc kubenswrapper[4856]: I1122 07:42:56.711052 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:42:56 crc kubenswrapper[4856]: E1122 07:42:56.711828 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:43:08 crc kubenswrapper[4856]: I1122 07:43:08.713888 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:43:08 crc kubenswrapper[4856]: E1122 07:43:08.714732 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:43:21 crc kubenswrapper[4856]: I1122 07:43:21.710381 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:43:21 crc kubenswrapper[4856]: E1122 07:43:21.711550 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:43:33 crc kubenswrapper[4856]: I1122 07:43:33.709702 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:43:33 crc kubenswrapper[4856]: E1122 07:43:33.710178 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:43:45 crc kubenswrapper[4856]: I1122 07:43:45.709690 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:43:45 crc kubenswrapper[4856]: E1122 07:43:45.710372 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:44:00 crc kubenswrapper[4856]: I1122 07:44:00.709978 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:44:00 crc kubenswrapper[4856]: E1122 07:44:00.711283 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:44:15 crc kubenswrapper[4856]: I1122 07:44:15.709917 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:44:15 crc kubenswrapper[4856]: E1122 07:44:15.710565 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:44:30 crc kubenswrapper[4856]: I1122 07:44:30.709287 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:44:30 crc kubenswrapper[4856]: E1122 07:44:30.710078 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:44:43 crc kubenswrapper[4856]: I1122 07:44:43.710533 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:44:43 crc kubenswrapper[4856]: E1122 07:44:43.712427 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:44:57 crc kubenswrapper[4856]: I1122 07:44:57.709890 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:44:57 crc kubenswrapper[4856]: E1122 07:44:57.710614 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160214 4856 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg"] Nov 22 07:45:00 crc kubenswrapper[4856]: E1122 07:45:00.160665 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="registry-server" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160683 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="registry-server" Nov 22 07:45:00 crc kubenswrapper[4856]: E1122 07:45:00.160695 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="extract-content" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160700 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="extract-content" Nov 22 07:45:00 crc kubenswrapper[4856]: E1122 07:45:00.160708 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="extract-utilities" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160715 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="extract-utilities" Nov 22 07:45:00 crc kubenswrapper[4856]: E1122 07:45:00.160732 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="registry-server" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160738 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="registry-server" Nov 22 07:45:00 crc kubenswrapper[4856]: E1122 07:45:00.160749 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="extract-utilities" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160755 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="extract-utilities" Nov 22 07:45:00 crc kubenswrapper[4856]: E1122 07:45:00.160767 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="extract-content" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160772 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="extract-content" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160911 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c486ae91-b727-4072-b03c-cbf476ffc97f" containerName="registry-server" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.160938 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="62471c35-ef91-4891-b6cf-25362282d812" containerName="registry-server" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.161558 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.164760 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.165721 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f152037-3ab0-425a-9bec-a1f0c06dc808-secret-volume\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.165775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f152037-3ab0-425a-9bec-a1f0c06dc808-config-volume\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.165798 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcfpb\" (UniqueName: \"kubernetes.io/projected/9f152037-3ab0-425a-9bec-a1f0c06dc808-kube-api-access-gcfpb\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.166018 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.172755 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg"] Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.267497 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f152037-3ab0-425a-9bec-a1f0c06dc808-secret-volume\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.267597 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f152037-3ab0-425a-9bec-a1f0c06dc808-config-volume\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.267627 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcfpb\" (UniqueName: \"kubernetes.io/projected/9f152037-3ab0-425a-9bec-a1f0c06dc808-kube-api-access-gcfpb\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.268925 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f152037-3ab0-425a-9bec-a1f0c06dc808-config-volume\") pod 
\"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.274198 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f152037-3ab0-425a-9bec-a1f0c06dc808-secret-volume\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.290751 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcfpb\" (UniqueName: \"kubernetes.io/projected/9f152037-3ab0-425a-9bec-a1f0c06dc808-kube-api-access-gcfpb\") pod \"collect-profiles-29396625-hwmlg\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.517403 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:00 crc kubenswrapper[4856]: I1122 07:45:00.954367 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg"] Nov 22 07:45:01 crc kubenswrapper[4856]: I1122 07:45:01.591061 4856 generic.go:334] "Generic (PLEG): container finished" podID="9f152037-3ab0-425a-9bec-a1f0c06dc808" containerID="0715e32e4800a2c98cdde51b7576d1d29174b7256ce9850762932d568e3491db" exitCode=0 Nov 22 07:45:01 crc kubenswrapper[4856]: I1122 07:45:01.591113 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" event={"ID":"9f152037-3ab0-425a-9bec-a1f0c06dc808","Type":"ContainerDied","Data":"0715e32e4800a2c98cdde51b7576d1d29174b7256ce9850762932d568e3491db"} Nov 22 07:45:01 crc kubenswrapper[4856]: I1122 07:45:01.591151 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" event={"ID":"9f152037-3ab0-425a-9bec-a1f0c06dc808","Type":"ContainerStarted","Data":"4e70ece6ef55d3f939746e68acb4f95de10088394ac506a19cd51c3b0b935cee"} Nov 22 07:45:02 crc kubenswrapper[4856]: I1122 07:45:02.870108 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.004927 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcfpb\" (UniqueName: \"kubernetes.io/projected/9f152037-3ab0-425a-9bec-a1f0c06dc808-kube-api-access-gcfpb\") pod \"9f152037-3ab0-425a-9bec-a1f0c06dc808\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.005057 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f152037-3ab0-425a-9bec-a1f0c06dc808-secret-volume\") pod \"9f152037-3ab0-425a-9bec-a1f0c06dc808\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.005094 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f152037-3ab0-425a-9bec-a1f0c06dc808-config-volume\") pod \"9f152037-3ab0-425a-9bec-a1f0c06dc808\" (UID: \"9f152037-3ab0-425a-9bec-a1f0c06dc808\") " Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.006462 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f152037-3ab0-425a-9bec-a1f0c06dc808-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f152037-3ab0-425a-9bec-a1f0c06dc808" (UID: "9f152037-3ab0-425a-9bec-a1f0c06dc808"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.011738 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f152037-3ab0-425a-9bec-a1f0c06dc808-kube-api-access-gcfpb" (OuterVolumeSpecName: "kube-api-access-gcfpb") pod "9f152037-3ab0-425a-9bec-a1f0c06dc808" (UID: "9f152037-3ab0-425a-9bec-a1f0c06dc808"). InnerVolumeSpecName "kube-api-access-gcfpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.017659 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f152037-3ab0-425a-9bec-a1f0c06dc808-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9f152037-3ab0-425a-9bec-a1f0c06dc808" (UID: "9f152037-3ab0-425a-9bec-a1f0c06dc808"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.106609 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcfpb\" (UniqueName: \"kubernetes.io/projected/9f152037-3ab0-425a-9bec-a1f0c06dc808-kube-api-access-gcfpb\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.106661 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9f152037-3ab0-425a-9bec-a1f0c06dc808-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.106675 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f152037-3ab0-425a-9bec-a1f0c06dc808-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.606221 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" event={"ID":"9f152037-3ab0-425a-9bec-a1f0c06dc808","Type":"ContainerDied","Data":"4e70ece6ef55d3f939746e68acb4f95de10088394ac506a19cd51c3b0b935cee"} Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.606268 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e70ece6ef55d3f939746e68acb4f95de10088394ac506a19cd51c3b0b935cee" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.606279 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg" Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.945070 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th"] Nov 22 07:45:03 crc kubenswrapper[4856]: I1122 07:45:03.950822 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396580-dt9th"] Nov 22 07:45:04 crc kubenswrapper[4856]: I1122 07:45:04.722369 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d49e34-b412-49d0-8236-227ae0043102" path="/var/lib/kubelet/pods/20d49e34-b412-49d0-8236-227ae0043102/volumes" Nov 22 07:45:10 crc kubenswrapper[4856]: I1122 07:45:10.710008 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:45:10 crc kubenswrapper[4856]: E1122 07:45:10.710711 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:45:20 crc kubenswrapper[4856]: I1122 07:45:20.727796 4856 scope.go:117] "RemoveContainer" containerID="c1e682722299cb8414291959f0127dc13304bc425fb68cf227565881399a874f" Nov 22 07:45:25 crc kubenswrapper[4856]: I1122 07:45:25.709344 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:45:25 crc kubenswrapper[4856]: E1122 07:45:25.710068 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:45:38 crc kubenswrapper[4856]: I1122 07:45:38.713553 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:45:38 crc kubenswrapper[4856]: E1122 07:45:38.714326 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:45:52 crc kubenswrapper[4856]: I1122 07:45:52.710535 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:45:52 crc kubenswrapper[4856]: E1122 07:45:52.711360 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:46:07 crc kubenswrapper[4856]: I1122 07:46:07.709445 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:46:07 crc kubenswrapper[4856]: E1122 07:46:07.710350 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:46:22 crc kubenswrapper[4856]: I1122 07:46:22.711542 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:46:22 crc kubenswrapper[4856]: E1122 07:46:22.712444 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:46:33 crc kubenswrapper[4856]: I1122 07:46:33.710629 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:46:33 crc kubenswrapper[4856]: E1122 07:46:33.711181 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:46:48 crc kubenswrapper[4856]: I1122 07:46:48.713622 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:46:48 crc kubenswrapper[4856]: E1122 07:46:48.714432 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:47:03 crc kubenswrapper[4856]: I1122 07:47:03.710715 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:47:03 crc kubenswrapper[4856]: E1122 07:47:03.713448 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:47:16 crc kubenswrapper[4856]: I1122 07:47:16.709930 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:47:16 crc kubenswrapper[4856]: E1122 07:47:16.710759 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.085299 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n4ft7"] Nov 22 07:47:21 crc kubenswrapper[4856]: E1122 07:47:21.086039 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f152037-3ab0-425a-9bec-a1f0c06dc808" containerName="collect-profiles" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.086057 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f152037-3ab0-425a-9bec-a1f0c06dc808" containerName="collect-profiles" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.086242 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f152037-3ab0-425a-9bec-a1f0c06dc808" containerName="collect-profiles" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.087414 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.096775 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4ft7"] Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.232289 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxr2t\" (UniqueName: \"kubernetes.io/projected/a990e45f-7c76-4cde-8029-7a419e28df44-kube-api-access-vxr2t\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.232374 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-catalog-content\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.232410 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-utilities\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.333892 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxr2t\" (UniqueName: \"kubernetes.io/projected/a990e45f-7c76-4cde-8029-7a419e28df44-kube-api-access-vxr2t\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.333960 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-catalog-content\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.333992 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-utilities\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.334557 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-utilities\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.334678 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-catalog-content\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.354422 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vxr2t\" (UniqueName: \"kubernetes.io/projected/a990e45f-7c76-4cde-8029-7a419e28df44-kube-api-access-vxr2t\") pod \"redhat-operators-n4ft7\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.421220 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:21 crc kubenswrapper[4856]: I1122 07:47:21.855197 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n4ft7"] Nov 22 07:47:22 crc kubenswrapper[4856]: I1122 07:47:22.052487 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4ft7" event={"ID":"a990e45f-7c76-4cde-8029-7a419e28df44","Type":"ContainerStarted","Data":"959dd81f0aa473c77995cd6407aec4ce617b4b1fc038987609d2ccf8bd4a2392"} Nov 22 07:47:23 crc kubenswrapper[4856]: I1122 07:47:23.062422 4856 generic.go:334] "Generic (PLEG): container finished" podID="a990e45f-7c76-4cde-8029-7a419e28df44" containerID="0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2" exitCode=0 Nov 22 07:47:23 crc kubenswrapper[4856]: I1122 07:47:23.062485 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4ft7" event={"ID":"a990e45f-7c76-4cde-8029-7a419e28df44","Type":"ContainerDied","Data":"0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2"} Nov 22 07:47:23 crc kubenswrapper[4856]: I1122 07:47:23.064539 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:47:26 crc kubenswrapper[4856]: I1122 07:47:26.085270 4856 generic.go:334] "Generic (PLEG): container finished" podID="a990e45f-7c76-4cde-8029-7a419e28df44" containerID="95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669" exitCode=0 Nov 22 07:47:26 crc kubenswrapper[4856]: I1122 07:47:26.085374 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4ft7" event={"ID":"a990e45f-7c76-4cde-8029-7a419e28df44","Type":"ContainerDied","Data":"95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669"} Nov 22 07:47:28 crc kubenswrapper[4856]: I1122 07:47:28.713593 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:47:28 crc kubenswrapper[4856]: E1122 07:47:28.716270 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:47:29 crc kubenswrapper[4856]: I1122 07:47:29.109962 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4ft7" event={"ID":"a990e45f-7c76-4cde-8029-7a419e28df44","Type":"ContainerStarted","Data":"a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096"} Nov 22 07:47:29 crc kubenswrapper[4856]: I1122 07:47:29.129710 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n4ft7" podStartSLOduration=3.198258682 podStartE2EDuration="8.129688576s" podCreationTimestamp="2025-11-22 07:47:21 +0000 UTC" 
firstStartedPulling="2025-11-22 07:47:23.064267431 +0000 UTC m=+2685.477660699" lastFinishedPulling="2025-11-22 07:47:27.995697325 +0000 UTC m=+2690.409090593" observedRunningTime="2025-11-22 07:47:29.126616073 +0000 UTC m=+2691.540009351" watchObservedRunningTime="2025-11-22 07:47:29.129688576 +0000 UTC m=+2691.543081834" Nov 22 07:47:33 crc kubenswrapper[4856]: I1122 07:47:31.421839 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:33 crc kubenswrapper[4856]: I1122 07:47:31.422266 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:33 crc kubenswrapper[4856]: I1122 07:47:32.467644 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n4ft7" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="registry-server" probeResult="failure" output=< Nov 22 07:47:33 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:47:33 crc kubenswrapper[4856]: > Nov 22 07:47:41 crc kubenswrapper[4856]: I1122 07:47:41.463750 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:41 crc kubenswrapper[4856]: I1122 07:47:41.511782 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:41 crc kubenswrapper[4856]: I1122 07:47:41.695831 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4ft7"] Nov 22 07:47:41 crc kubenswrapper[4856]: I1122 07:47:41.710542 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:47:43 crc kubenswrapper[4856]: I1122 07:47:43.215576 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"1585a0d87eeecdd07bf8a6c54b454a378ff9c6404235b8b030917d82c75a75af"} Nov 22 07:47:43 crc kubenswrapper[4856]: I1122 07:47:43.215874 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n4ft7" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="registry-server" containerID="cri-o://a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096" gracePeriod=2 Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.203895 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.235663 4856 generic.go:334] "Generic (PLEG): container finished" podID="a990e45f-7c76-4cde-8029-7a419e28df44" containerID="a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096" exitCode=0 Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.235734 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4ft7" event={"ID":"a990e45f-7c76-4cde-8029-7a419e28df44","Type":"ContainerDied","Data":"a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096"} Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.235745 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n4ft7" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.235785 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n4ft7" event={"ID":"a990e45f-7c76-4cde-8029-7a419e28df44","Type":"ContainerDied","Data":"959dd81f0aa473c77995cd6407aec4ce617b4b1fc038987609d2ccf8bd4a2392"} Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.235817 4856 scope.go:117] "RemoveContainer" containerID="a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.273782 4856 scope.go:117] "RemoveContainer" containerID="95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.275104 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-catalog-content\") pod \"a990e45f-7c76-4cde-8029-7a419e28df44\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.275213 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxr2t\" (UniqueName: \"kubernetes.io/projected/a990e45f-7c76-4cde-8029-7a419e28df44-kube-api-access-vxr2t\") pod \"a990e45f-7c76-4cde-8029-7a419e28df44\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.275261 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-utilities\") pod \"a990e45f-7c76-4cde-8029-7a419e28df44\" (UID: \"a990e45f-7c76-4cde-8029-7a419e28df44\") " Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.277585 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-utilities" (OuterVolumeSpecName: "utilities") pod "a990e45f-7c76-4cde-8029-7a419e28df44" (UID: "a990e45f-7c76-4cde-8029-7a419e28df44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.297497 4856 scope.go:117] "RemoveContainer" containerID="0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.297856 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a990e45f-7c76-4cde-8029-7a419e28df44-kube-api-access-vxr2t" (OuterVolumeSpecName: "kube-api-access-vxr2t") pod "a990e45f-7c76-4cde-8029-7a419e28df44" (UID: "a990e45f-7c76-4cde-8029-7a419e28df44"). InnerVolumeSpecName "kube-api-access-vxr2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.347260 4856 scope.go:117] "RemoveContainer" containerID="a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096" Nov 22 07:47:44 crc kubenswrapper[4856]: E1122 07:47:44.348039 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096\": container with ID starting with a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096 not found: ID does not exist" containerID="a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.348088 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096"} err="failed to get container status \"a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096\": rpc error: code = NotFound desc = could not find container \"a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096\": container with ID starting with a974157ab651689ce593b6aeae7a4c4a2b75f2b4dc03ea2f2e9ea28dea53c096 not found: ID does not exist" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.348117 4856 scope.go:117] "RemoveContainer" containerID="95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669" Nov 22 07:47:44 crc kubenswrapper[4856]: E1122 07:47:44.348553 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669\": container with ID starting with 95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669 not found: ID does not exist" containerID="95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.348584 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669"} err="failed to get container status \"95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669\": rpc error: code = NotFound desc = could not find container \"95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669\": container with ID starting with 95384a09530ada56fcb72254516385594dabc1766034ce6f9c449d0469e02669 not found: ID does not exist" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.348606 4856 scope.go:117] "RemoveContainer" containerID="0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2" Nov 22 07:47:44 crc kubenswrapper[4856]: E1122 07:47:44.349110 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2\": container with ID starting with 0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2 not found: ID does not exist" containerID="0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.349193 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2"} err="failed to get container status \"0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2\": rpc error: code = NotFound desc = could not 
find container \"0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2\": container with ID starting with 0c7ce12739d94d7859f31dce5c52248d3ddf63c5ec00abe9d5dcf568b6abd9f2 not found: ID does not exist" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.378608 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxr2t\" (UniqueName: \"kubernetes.io/projected/a990e45f-7c76-4cde-8029-7a419e28df44-kube-api-access-vxr2t\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.378638 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.382645 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a990e45f-7c76-4cde-8029-7a419e28df44" (UID: "a990e45f-7c76-4cde-8029-7a419e28df44"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.479701 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a990e45f-7c76-4cde-8029-7a419e28df44-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.571647 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n4ft7"] Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.586652 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n4ft7"] Nov 22 07:47:44 crc kubenswrapper[4856]: I1122 07:47:44.719661 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" path="/var/lib/kubelet/pods/a990e45f-7c76-4cde-8029-7a419e28df44/volumes" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.349115 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-p6sbk"] Nov 22 07:48:20 crc kubenswrapper[4856]: E1122 07:48:20.350065 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="registry-server" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.350081 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="registry-server" Nov 22 07:48:20 crc kubenswrapper[4856]: E1122 07:48:20.350091 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="extract-utilities" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.350097 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="extract-utilities" Nov 22 07:48:20 crc kubenswrapper[4856]: E1122 07:48:20.350114 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="extract-content" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.350123 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="extract-content" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.350284 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a990e45f-7c76-4cde-8029-7a419e28df44" containerName="registry-server" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.351589 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.402673 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6sbk"] Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.417652 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-utilities\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.417870 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-catalog-content\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.418192 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsjd\" (UniqueName: \"kubernetes.io/projected/2e94d5aa-f22c-493a-a028-d58d52356f36-kube-api-access-2hsjd\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.520371 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-catalog-content\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.520543 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hsjd\" (UniqueName: \"kubernetes.io/projected/2e94d5aa-f22c-493a-a028-d58d52356f36-kube-api-access-2hsjd\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.520604 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-utilities\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.521126 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-catalog-content\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.521154 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-utilities\") pod \"community-operators-p6sbk\" (UID: 
\"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.543608 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hsjd\" (UniqueName: \"kubernetes.io/projected/2e94d5aa-f22c-493a-a028-d58d52356f36-kube-api-access-2hsjd\") pod \"community-operators-p6sbk\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:20 crc kubenswrapper[4856]: I1122 07:48:20.678783 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:21 crc kubenswrapper[4856]: I1122 07:48:21.172847 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-p6sbk"] Nov 22 07:48:21 crc kubenswrapper[4856]: I1122 07:48:21.529956 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6sbk" event={"ID":"2e94d5aa-f22c-493a-a028-d58d52356f36","Type":"ContainerStarted","Data":"370ce68fba7e87afaa525638f3a0f5cc0a4a7c57b07d4dde06967e711277457f"} Nov 22 07:48:22 crc kubenswrapper[4856]: I1122 07:48:22.539087 4856 generic.go:334] "Generic (PLEG): container finished" podID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerID="eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13" exitCode=0 Nov 22 07:48:22 crc kubenswrapper[4856]: I1122 07:48:22.539156 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6sbk" event={"ID":"2e94d5aa-f22c-493a-a028-d58d52356f36","Type":"ContainerDied","Data":"eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13"} Nov 22 07:48:27 crc kubenswrapper[4856]: I1122 07:48:27.586241 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6sbk" event={"ID":"2e94d5aa-f22c-493a-a028-d58d52356f36","Type":"ContainerStarted","Data":"abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679"} Nov 22 07:48:28 crc kubenswrapper[4856]: I1122 07:48:28.594202 4856 generic.go:334] "Generic (PLEG): container finished" podID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerID="abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679" exitCode=0 Nov 22 07:48:28 crc kubenswrapper[4856]: I1122 07:48:28.594269 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6sbk" event={"ID":"2e94d5aa-f22c-493a-a028-d58d52356f36","Type":"ContainerDied","Data":"abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679"} Nov 22 07:48:29 crc kubenswrapper[4856]: I1122 07:48:29.602852 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6sbk" event={"ID":"2e94d5aa-f22c-493a-a028-d58d52356f36","Type":"ContainerStarted","Data":"c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db"} Nov 22 07:48:29 crc kubenswrapper[4856]: I1122 07:48:29.622207 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-p6sbk" podStartSLOduration=3.111929942 podStartE2EDuration="9.622187767s" podCreationTimestamp="2025-11-22 07:48:20 +0000 UTC" firstStartedPulling="2025-11-22 07:48:22.54189205 +0000 UTC m=+2744.955285308" lastFinishedPulling="2025-11-22 07:48:29.052149875 +0000 UTC m=+2751.465543133" observedRunningTime="2025-11-22 07:48:29.62189208 +0000 UTC m=+2752.035285348" 
watchObservedRunningTime="2025-11-22 07:48:29.622187767 +0000 UTC m=+2752.035581025" Nov 22 07:48:30 crc kubenswrapper[4856]: I1122 07:48:30.679261 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:30 crc kubenswrapper[4856]: I1122 07:48:30.679699 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:31 crc kubenswrapper[4856]: I1122 07:48:31.720475 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-p6sbk" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="registry-server" probeResult="failure" output=< Nov 22 07:48:31 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:48:31 crc kubenswrapper[4856]: > Nov 22 07:48:40 crc kubenswrapper[4856]: I1122 07:48:40.723900 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:40 crc kubenswrapper[4856]: I1122 07:48:40.769595 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:40 crc kubenswrapper[4856]: I1122 07:48:40.958452 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6sbk"] Nov 22 07:48:42 crc kubenswrapper[4856]: I1122 07:48:42.706953 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-p6sbk" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="registry-server" containerID="cri-o://c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db" gracePeriod=2 Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.609705 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.709674 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-catalog-content\") pod \"2e94d5aa-f22c-493a-a028-d58d52356f36\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.709893 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hsjd\" (UniqueName: \"kubernetes.io/projected/2e94d5aa-f22c-493a-a028-d58d52356f36-kube-api-access-2hsjd\") pod \"2e94d5aa-f22c-493a-a028-d58d52356f36\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.709947 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-utilities\") pod \"2e94d5aa-f22c-493a-a028-d58d52356f36\" (UID: \"2e94d5aa-f22c-493a-a028-d58d52356f36\") " Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.711153 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-utilities" (OuterVolumeSpecName: "utilities") pod "2e94d5aa-f22c-493a-a028-d58d52356f36" (UID: "2e94d5aa-f22c-493a-a028-d58d52356f36"). InnerVolumeSpecName "utilities". 
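The startup-probe failure above ends with grpc_health_probe-style output ("timeout: failed to connect service \":50051\" within 1s"): the registry-server container was not yet serving gRPC on port 50051 when the first probe ran, and about ten seconds later the same probe reports "started" and the readiness probe flips to "ready". Below is a minimal sketch of an exec-based startup probe of that kind, written against the k8s.io/api/core/v1 types; the port number is taken from the log output, while the grpc_health_probe command and the timing fields are assumptions for illustration, not the actual openshift-marketplace catalog-pod spec.

    // Sketch only: a startup probe of the kind that would produce the
    // "failed to connect service \":50051\"" output above. The command and
    // timings are assumed; only the port comes from the log.
    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	startup := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{
    				// gRPC health check against the registry-server port.
    				Command: []string{"grpc_health_probe", "-addr=:50051"},
    			},
    		},
    		TimeoutSeconds:   1,  // matches the "within 1s" timeout in the log
    		PeriodSeconds:    10, // retried until the probe reports "started"
    		FailureThreshold: 3,
    	}
    	fmt.Printf("startupProbe: %+v\n", startup)
    }

Until a startup probe like this succeeds, the kubelet holds off the readiness and liveness probes; once it reports started, the readiness probe runs and the pod goes ready, which is the unhealthy, then started, then ready sequence recorded above.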
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.719112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e94d5aa-f22c-493a-a028-d58d52356f36-kube-api-access-2hsjd" (OuterVolumeSpecName: "kube-api-access-2hsjd") pod "2e94d5aa-f22c-493a-a028-d58d52356f36" (UID: "2e94d5aa-f22c-493a-a028-d58d52356f36"). InnerVolumeSpecName "kube-api-access-2hsjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.723047 4856 generic.go:334] "Generic (PLEG): container finished" podID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerID="c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db" exitCode=0 Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.723123 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6sbk" event={"ID":"2e94d5aa-f22c-493a-a028-d58d52356f36","Type":"ContainerDied","Data":"c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db"} Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.723139 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-p6sbk" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.723174 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-p6sbk" event={"ID":"2e94d5aa-f22c-493a-a028-d58d52356f36","Type":"ContainerDied","Data":"370ce68fba7e87afaa525638f3a0f5cc0a4a7c57b07d4dde06967e711277457f"} Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.723202 4856 scope.go:117] "RemoveContainer" containerID="c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.763521 4856 scope.go:117] "RemoveContainer" containerID="abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.768684 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e94d5aa-f22c-493a-a028-d58d52356f36" (UID: "2e94d5aa-f22c-493a-a028-d58d52356f36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.785933 4856 scope.go:117] "RemoveContainer" containerID="eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.812367 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.812401 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hsjd\" (UniqueName: \"kubernetes.io/projected/2e94d5aa-f22c-493a-a028-d58d52356f36-kube-api-access-2hsjd\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.812413 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e94d5aa-f22c-493a-a028-d58d52356f36-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.823989 4856 scope.go:117] "RemoveContainer" containerID="c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db" Nov 22 07:48:43 crc kubenswrapper[4856]: E1122 07:48:43.827500 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db\": container with ID starting with c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db not found: ID does not exist" containerID="c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.827589 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db"} err="failed to get container status \"c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db\": rpc error: code = NotFound desc = could not find container \"c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db\": container with ID starting with c5ef0d4c872fa758e222f1fee441178a123133adbfc14c85100a1b6b4f7a64db not found: ID does not exist" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.827725 4856 scope.go:117] "RemoveContainer" containerID="abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679" Nov 22 07:48:43 crc kubenswrapper[4856]: E1122 07:48:43.828678 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679\": container with ID starting with abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679 not found: ID does not exist" containerID="abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.828721 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679"} err="failed to get container status \"abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679\": rpc error: code = NotFound desc = could not find container \"abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679\": container with ID starting with abdd7fdfe29c47041e627d427252b9520de10f8cde1c91da7290a6d8508bc679 not found: ID does not exist" Nov 22 07:48:43 crc 
kubenswrapper[4856]: I1122 07:48:43.828740 4856 scope.go:117] "RemoveContainer" containerID="eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13" Nov 22 07:48:43 crc kubenswrapper[4856]: E1122 07:48:43.829295 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13\": container with ID starting with eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13 not found: ID does not exist" containerID="eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13" Nov 22 07:48:43 crc kubenswrapper[4856]: I1122 07:48:43.829321 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13"} err="failed to get container status \"eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13\": rpc error: code = NotFound desc = could not find container \"eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13\": container with ID starting with eece22502151aedc348ca6b08be2bf6a8eae6527fc091edf4c9edb5320d09d13 not found: ID does not exist" Nov 22 07:48:44 crc kubenswrapper[4856]: I1122 07:48:44.060181 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-p6sbk"] Nov 22 07:48:44 crc kubenswrapper[4856]: I1122 07:48:44.065876 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-p6sbk"] Nov 22 07:48:44 crc kubenswrapper[4856]: I1122 07:48:44.721026 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" path="/var/lib/kubelet/pods/2e94d5aa-f22c-493a-a028-d58d52356f36/volumes" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.847199 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rrp5f"] Nov 22 07:49:28 crc kubenswrapper[4856]: E1122 07:49:28.851028 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="extract-utilities" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.851071 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="extract-utilities" Nov 22 07:49:28 crc kubenswrapper[4856]: E1122 07:49:28.851101 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="registry-server" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.851112 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="registry-server" Nov 22 07:49:28 crc kubenswrapper[4856]: E1122 07:49:28.851125 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="extract-content" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.851135 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="extract-content" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.851413 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e94d5aa-f22c-493a-a028-d58d52356f36" containerName="registry-server" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.854974 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.876914 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrp5f"] Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.923998 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-utilities\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.924060 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/376b8f0f-f9d1-4677-b6d2-05650839aafc-kube-api-access-ntdp8\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:28 crc kubenswrapper[4856]: I1122 07:49:28.924084 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-catalog-content\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.026311 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/376b8f0f-f9d1-4677-b6d2-05650839aafc-kube-api-access-ntdp8\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.026403 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-catalog-content\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.026576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-utilities\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.027283 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-catalog-content\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.027342 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-utilities\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.047394 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/376b8f0f-f9d1-4677-b6d2-05650839aafc-kube-api-access-ntdp8\") pod \"redhat-marketplace-rrp5f\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.182857 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:29 crc kubenswrapper[4856]: I1122 07:49:29.415571 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrp5f"] Nov 22 07:49:30 crc kubenswrapper[4856]: I1122 07:49:30.074988 4856 generic.go:334] "Generic (PLEG): container finished" podID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerID="d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142" exitCode=0 Nov 22 07:49:30 crc kubenswrapper[4856]: I1122 07:49:30.075037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrp5f" event={"ID":"376b8f0f-f9d1-4677-b6d2-05650839aafc","Type":"ContainerDied","Data":"d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142"} Nov 22 07:49:30 crc kubenswrapper[4856]: I1122 07:49:30.075443 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrp5f" event={"ID":"376b8f0f-f9d1-4677-b6d2-05650839aafc","Type":"ContainerStarted","Data":"e622de032d8fa8bfae036e53f0b5af81c964f8fa01f498ef2558e29e88249555"} Nov 22 07:49:31 crc kubenswrapper[4856]: I1122 07:49:31.089236 4856 generic.go:334] "Generic (PLEG): container finished" podID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerID="c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404" exitCode=0 Nov 22 07:49:31 crc kubenswrapper[4856]: I1122 07:49:31.089323 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrp5f" event={"ID":"376b8f0f-f9d1-4677-b6d2-05650839aafc","Type":"ContainerDied","Data":"c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404"} Nov 22 07:49:32 crc kubenswrapper[4856]: I1122 07:49:32.098938 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrp5f" event={"ID":"376b8f0f-f9d1-4677-b6d2-05650839aafc","Type":"ContainerStarted","Data":"2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d"} Nov 22 07:49:32 crc kubenswrapper[4856]: I1122 07:49:32.119412 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rrp5f" podStartSLOduration=2.649054992 podStartE2EDuration="4.119393961s" podCreationTimestamp="2025-11-22 07:49:28 +0000 UTC" firstStartedPulling="2025-11-22 07:49:30.07720814 +0000 UTC m=+2812.490601398" lastFinishedPulling="2025-11-22 07:49:31.547547109 +0000 UTC m=+2813.960940367" observedRunningTime="2025-11-22 07:49:32.11378653 +0000 UTC m=+2814.527179788" watchObservedRunningTime="2025-11-22 07:49:32.119393961 +0000 UTC m=+2814.532787219" Nov 22 07:49:39 crc kubenswrapper[4856]: I1122 07:49:39.183606 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:39 crc kubenswrapper[4856]: I1122 07:49:39.184625 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:39 crc kubenswrapper[4856]: I1122 07:49:39.234433 4856 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:40 crc kubenswrapper[4856]: I1122 07:49:40.201727 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:40 crc kubenswrapper[4856]: I1122 07:49:40.242250 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrp5f"] Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.170484 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rrp5f" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="registry-server" containerID="cri-o://2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d" gracePeriod=2 Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.530165 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.626840 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-utilities\") pod \"376b8f0f-f9d1-4677-b6d2-05650839aafc\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.627067 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/376b8f0f-f9d1-4677-b6d2-05650839aafc-kube-api-access-ntdp8\") pod \"376b8f0f-f9d1-4677-b6d2-05650839aafc\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.627202 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-catalog-content\") pod \"376b8f0f-f9d1-4677-b6d2-05650839aafc\" (UID: \"376b8f0f-f9d1-4677-b6d2-05650839aafc\") " Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.629257 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-utilities" (OuterVolumeSpecName: "utilities") pod "376b8f0f-f9d1-4677-b6d2-05650839aafc" (UID: "376b8f0f-f9d1-4677-b6d2-05650839aafc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.633613 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/376b8f0f-f9d1-4677-b6d2-05650839aafc-kube-api-access-ntdp8" (OuterVolumeSpecName: "kube-api-access-ntdp8") pod "376b8f0f-f9d1-4677-b6d2-05650839aafc" (UID: "376b8f0f-f9d1-4677-b6d2-05650839aafc"). InnerVolumeSpecName "kube-api-access-ntdp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.647153 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "376b8f0f-f9d1-4677-b6d2-05650839aafc" (UID: "376b8f0f-f9d1-4677-b6d2-05650839aafc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.729350 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.729777 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntdp8\" (UniqueName: \"kubernetes.io/projected/376b8f0f-f9d1-4677-b6d2-05650839aafc-kube-api-access-ntdp8\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:42 crc kubenswrapper[4856]: I1122 07:49:42.729792 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/376b8f0f-f9d1-4677-b6d2-05650839aafc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.179990 4856 generic.go:334] "Generic (PLEG): container finished" podID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerID="2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d" exitCode=0 Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.180041 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrp5f" event={"ID":"376b8f0f-f9d1-4677-b6d2-05650839aafc","Type":"ContainerDied","Data":"2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d"} Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.180070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rrp5f" event={"ID":"376b8f0f-f9d1-4677-b6d2-05650839aafc","Type":"ContainerDied","Data":"e622de032d8fa8bfae036e53f0b5af81c964f8fa01f498ef2558e29e88249555"} Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.180092 4856 scope.go:117] "RemoveContainer" containerID="2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.180235 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rrp5f" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.203015 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrp5f"] Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.207891 4856 scope.go:117] "RemoveContainer" containerID="c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.215180 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rrp5f"] Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.228975 4856 scope.go:117] "RemoveContainer" containerID="d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.258892 4856 scope.go:117] "RemoveContainer" containerID="2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d" Nov 22 07:49:43 crc kubenswrapper[4856]: E1122 07:49:43.259465 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d\": container with ID starting with 2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d not found: ID does not exist" containerID="2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.259550 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d"} err="failed to get container status \"2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d\": rpc error: code = NotFound desc = could not find container \"2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d\": container with ID starting with 2edd5507a10cc632fa3e063286aad24916a9f2304ec0eff52d575f8030e6df8d not found: ID does not exist" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.259582 4856 scope.go:117] "RemoveContainer" containerID="c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404" Nov 22 07:49:43 crc kubenswrapper[4856]: E1122 07:49:43.260007 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404\": container with ID starting with c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404 not found: ID does not exist" containerID="c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.260050 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404"} err="failed to get container status \"c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404\": rpc error: code = NotFound desc = could not find container \"c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404\": container with ID starting with c2d10a251ae3b2bee9ace32b3dd1ee91ba684d9081ff5419327365f5babe9404 not found: ID does not exist" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.260088 4856 scope.go:117] "RemoveContainer" containerID="d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142" Nov 22 07:49:43 crc kubenswrapper[4856]: E1122 07:49:43.260665 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142\": container with ID starting with d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142 not found: ID does not exist" containerID="d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142" Nov 22 07:49:43 crc kubenswrapper[4856]: I1122 07:49:43.260766 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142"} err="failed to get container status \"d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142\": rpc error: code = NotFound desc = could not find container \"d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142\": container with ID starting with d12d8593934faf7543a9f837869039a8eba9a0a236b13c8ef5a55560be399142 not found: ID does not exist" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.719362 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" path="/var/lib/kubelet/pods/376b8f0f-f9d1-4677-b6d2-05650839aafc/volumes" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.891550 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wgstt"] Nov 22 07:49:44 crc kubenswrapper[4856]: E1122 07:49:44.892313 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="registry-server" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.892347 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="registry-server" Nov 22 07:49:44 crc kubenswrapper[4856]: E1122 07:49:44.892402 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="extract-utilities" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.892412 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="extract-utilities" Nov 22 07:49:44 crc kubenswrapper[4856]: E1122 07:49:44.892428 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="extract-content" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.892436 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="extract-content" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.892653 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="376b8f0f-f9d1-4677-b6d2-05650839aafc" containerName="registry-server" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.893797 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.900742 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgstt"] Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.963345 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-utilities\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.963418 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-catalog-content\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:44 crc kubenswrapper[4856]: I1122 07:49:44.963473 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94672\" (UniqueName: \"kubernetes.io/projected/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-kube-api-access-94672\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.064623 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94672\" (UniqueName: \"kubernetes.io/projected/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-kube-api-access-94672\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.064697 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-utilities\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.065147 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-utilities\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.065301 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-catalog-content\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.065567 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-catalog-content\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.085300 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-94672\" (UniqueName: \"kubernetes.io/projected/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-kube-api-access-94672\") pod \"certified-operators-wgstt\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.220374 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:45 crc kubenswrapper[4856]: I1122 07:49:45.670343 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgstt"] Nov 22 07:49:46 crc kubenswrapper[4856]: I1122 07:49:46.200936 4856 generic.go:334] "Generic (PLEG): container finished" podID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerID="7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7" exitCode=0 Nov 22 07:49:46 crc kubenswrapper[4856]: I1122 07:49:46.200974 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgstt" event={"ID":"df1c8611-50eb-4e4c-adc4-58cd3dbe011f","Type":"ContainerDied","Data":"7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7"} Nov 22 07:49:46 crc kubenswrapper[4856]: I1122 07:49:46.200998 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgstt" event={"ID":"df1c8611-50eb-4e4c-adc4-58cd3dbe011f","Type":"ContainerStarted","Data":"dde196b362a49b926119d72b2e3a92117dad9bd71bdd505e40951a7e5901172d"} Nov 22 07:49:47 crc kubenswrapper[4856]: I1122 07:49:47.216986 4856 generic.go:334] "Generic (PLEG): container finished" podID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerID="d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3" exitCode=0 Nov 22 07:49:47 crc kubenswrapper[4856]: I1122 07:49:47.217054 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgstt" event={"ID":"df1c8611-50eb-4e4c-adc4-58cd3dbe011f","Type":"ContainerDied","Data":"d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3"} Nov 22 07:49:48 crc kubenswrapper[4856]: I1122 07:49:48.227629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgstt" event={"ID":"df1c8611-50eb-4e4c-adc4-58cd3dbe011f","Type":"ContainerStarted","Data":"b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0"} Nov 22 07:49:48 crc kubenswrapper[4856]: I1122 07:49:48.245012 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wgstt" podStartSLOduration=2.822888419 podStartE2EDuration="4.244991469s" podCreationTimestamp="2025-11-22 07:49:44 +0000 UTC" firstStartedPulling="2025-11-22 07:49:46.203200709 +0000 UTC m=+2828.616593967" lastFinishedPulling="2025-11-22 07:49:47.625303759 +0000 UTC m=+2830.038697017" observedRunningTime="2025-11-22 07:49:48.243283493 +0000 UTC m=+2830.656676751" watchObservedRunningTime="2025-11-22 07:49:48.244991469 +0000 UTC m=+2830.658384727" Nov 22 07:49:55 crc kubenswrapper[4856]: I1122 07:49:55.220759 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:55 crc kubenswrapper[4856]: I1122 07:49:55.221429 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:55 crc kubenswrapper[4856]: I1122 07:49:55.273085 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:55 crc kubenswrapper[4856]: I1122 07:49:55.330832 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:55 crc kubenswrapper[4856]: I1122 07:49:55.511599 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgstt"] Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.294823 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wgstt" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="registry-server" containerID="cri-o://b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0" gracePeriod=2 Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.719131 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.758862 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-utilities\") pod \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.758997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-catalog-content\") pod \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.759104 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94672\" (UniqueName: \"kubernetes.io/projected/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-kube-api-access-94672\") pod \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\" (UID: \"df1c8611-50eb-4e4c-adc4-58cd3dbe011f\") " Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.760051 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-utilities" (OuterVolumeSpecName: "utilities") pod "df1c8611-50eb-4e4c-adc4-58cd3dbe011f" (UID: "df1c8611-50eb-4e4c-adc4-58cd3dbe011f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.765639 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-kube-api-access-94672" (OuterVolumeSpecName: "kube-api-access-94672") pod "df1c8611-50eb-4e4c-adc4-58cd3dbe011f" (UID: "df1c8611-50eb-4e4c-adc4-58cd3dbe011f"). InnerVolumeSpecName "kube-api-access-94672". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.861567 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94672\" (UniqueName: \"kubernetes.io/projected/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-kube-api-access-94672\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:57 crc kubenswrapper[4856]: I1122 07:49:57.861606 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.303834 4856 generic.go:334] "Generic (PLEG): container finished" podID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerID="b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0" exitCode=0 Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.303914 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgstt" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.303930 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgstt" event={"ID":"df1c8611-50eb-4e4c-adc4-58cd3dbe011f","Type":"ContainerDied","Data":"b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0"} Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.304380 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgstt" event={"ID":"df1c8611-50eb-4e4c-adc4-58cd3dbe011f","Type":"ContainerDied","Data":"dde196b362a49b926119d72b2e3a92117dad9bd71bdd505e40951a7e5901172d"} Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.304397 4856 scope.go:117] "RemoveContainer" containerID="b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.320679 4856 scope.go:117] "RemoveContainer" containerID="d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.350429 4856 scope.go:117] "RemoveContainer" containerID="7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.370783 4856 scope.go:117] "RemoveContainer" containerID="b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0" Nov 22 07:49:58 crc kubenswrapper[4856]: E1122 07:49:58.371364 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0\": container with ID starting with b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0 not found: ID does not exist" containerID="b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.371400 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0"} err="failed to get container status \"b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0\": rpc error: code = NotFound desc = could not find container \"b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0\": container with ID starting with b38e159e6f824c2745e58c2dfa4e5bc7852e4166767ade6deff6bbf888a7b5b0 not found: ID does not exist" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.371421 4856 scope.go:117] 
"RemoveContainer" containerID="d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3" Nov 22 07:49:58 crc kubenswrapper[4856]: E1122 07:49:58.371884 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3\": container with ID starting with d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3 not found: ID does not exist" containerID="d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.371916 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3"} err="failed to get container status \"d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3\": rpc error: code = NotFound desc = could not find container \"d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3\": container with ID starting with d023df1316124c335c39793d9943ef1927ba48e2061de1a982a53a38c15759c3 not found: ID does not exist" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.371933 4856 scope.go:117] "RemoveContainer" containerID="7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7" Nov 22 07:49:58 crc kubenswrapper[4856]: E1122 07:49:58.372391 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7\": container with ID starting with 7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7 not found: ID does not exist" containerID="7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.372425 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7"} err="failed to get container status \"7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7\": rpc error: code = NotFound desc = could not find container \"7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7\": container with ID starting with 7813fc345c273bc84871963ce26c43ac988007ad46c1c11a9c79e15569b9ffc7 not found: ID does not exist" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.533820 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df1c8611-50eb-4e4c-adc4-58cd3dbe011f" (UID: "df1c8611-50eb-4e4c-adc4-58cd3dbe011f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.572234 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df1c8611-50eb-4e4c-adc4-58cd3dbe011f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.643546 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgstt"] Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.651373 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wgstt"] Nov 22 07:49:58 crc kubenswrapper[4856]: I1122 07:49:58.720614 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" path="/var/lib/kubelet/pods/df1c8611-50eb-4e4c-adc4-58cd3dbe011f/volumes" Nov 22 07:49:59 crc kubenswrapper[4856]: I1122 07:49:59.754094 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:49:59 crc kubenswrapper[4856]: I1122 07:49:59.754171 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:50:29 crc kubenswrapper[4856]: I1122 07:50:29.754646 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:50:29 crc kubenswrapper[4856]: I1122 07:50:29.755656 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:50:59 crc kubenswrapper[4856]: I1122 07:50:59.754421 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:50:59 crc kubenswrapper[4856]: I1122 07:50:59.754947 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:50:59 crc kubenswrapper[4856]: I1122 07:50:59.754986 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:50:59 crc kubenswrapper[4856]: I1122 07:50:59.755404 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"1585a0d87eeecdd07bf8a6c54b454a378ff9c6404235b8b030917d82c75a75af"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:50:59 crc kubenswrapper[4856]: I1122 07:50:59.755460 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://1585a0d87eeecdd07bf8a6c54b454a378ff9c6404235b8b030917d82c75a75af" gracePeriod=600 Nov 22 07:51:00 crc kubenswrapper[4856]: I1122 07:51:00.793547 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="1585a0d87eeecdd07bf8a6c54b454a378ff9c6404235b8b030917d82c75a75af" exitCode=0 Nov 22 07:51:00 crc kubenswrapper[4856]: I1122 07:51:00.793626 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"1585a0d87eeecdd07bf8a6c54b454a378ff9c6404235b8b030917d82c75a75af"} Nov 22 07:51:00 crc kubenswrapper[4856]: I1122 07:51:00.794149 4856 scope.go:117] "RemoveContainer" containerID="73bf7d5ac038769e89102eb173792ae052ec0ea4db487f914ced034cb8ffdb5d" Nov 22 07:51:01 crc kubenswrapper[4856]: I1122 07:51:01.810886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3"} Nov 22 07:53:29 crc kubenswrapper[4856]: I1122 07:53:29.754666 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:53:29 crc kubenswrapper[4856]: I1122 07:53:29.755434 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:53:59 crc kubenswrapper[4856]: I1122 07:53:59.754500 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:53:59 crc kubenswrapper[4856]: I1122 07:53:59.755401 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.754027 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.754932 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.754992 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.755894 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.755957 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" gracePeriod=600 Nov 22 07:54:29 crc kubenswrapper[4856]: E1122 07:54:29.882315 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.901843 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" exitCode=0 Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.901893 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3"} Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.901933 4856 scope.go:117] "RemoveContainer" containerID="1585a0d87eeecdd07bf8a6c54b454a378ff9c6404235b8b030917d82c75a75af" Nov 22 07:54:29 crc kubenswrapper[4856]: I1122 07:54:29.902253 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:54:29 crc kubenswrapper[4856]: E1122 07:54:29.902446 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:54:41 crc kubenswrapper[4856]: I1122 07:54:41.709291 4856 scope.go:117] "RemoveContainer" 
containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:54:41 crc kubenswrapper[4856]: E1122 07:54:41.710076 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:54:56 crc kubenswrapper[4856]: I1122 07:54:56.709541 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:54:56 crc kubenswrapper[4856]: E1122 07:54:56.710637 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:55:07 crc kubenswrapper[4856]: I1122 07:55:07.709611 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:55:07 crc kubenswrapper[4856]: E1122 07:55:07.710534 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:55:22 crc kubenswrapper[4856]: I1122 07:55:22.712423 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:55:22 crc kubenswrapper[4856]: E1122 07:55:22.713549 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:55:35 crc kubenswrapper[4856]: I1122 07:55:35.710007 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:55:35 crc kubenswrapper[4856]: E1122 07:55:35.710970 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:55:47 crc kubenswrapper[4856]: I1122 07:55:47.710204 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:55:47 crc kubenswrapper[4856]: E1122 07:55:47.711065 4856 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:56:01 crc kubenswrapper[4856]: I1122 07:56:01.710410 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:56:01 crc kubenswrapper[4856]: E1122 07:56:01.711531 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:56:12 crc kubenswrapper[4856]: I1122 07:56:12.710283 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:56:12 crc kubenswrapper[4856]: E1122 07:56:12.711290 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:56:26 crc kubenswrapper[4856]: I1122 07:56:26.709814 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:56:26 crc kubenswrapper[4856]: E1122 07:56:26.711094 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:56:38 crc kubenswrapper[4856]: I1122 07:56:38.714043 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:56:38 crc kubenswrapper[4856]: E1122 07:56:38.714936 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:56:52 crc kubenswrapper[4856]: I1122 07:56:52.711434 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:56:52 crc kubenswrapper[4856]: E1122 07:56:52.712652 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:57:04 crc kubenswrapper[4856]: I1122 07:57:04.712106 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:57:04 crc kubenswrapper[4856]: E1122 07:57:04.713255 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:57:16 crc kubenswrapper[4856]: I1122 07:57:16.709816 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:57:16 crc kubenswrapper[4856]: E1122 07:57:16.712276 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:57:29 crc kubenswrapper[4856]: I1122 07:57:29.709883 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:57:29 crc kubenswrapper[4856]: E1122 07:57:29.710700 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:57:44 crc kubenswrapper[4856]: I1122 07:57:44.709684 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:57:44 crc kubenswrapper[4856]: E1122 07:57:44.710823 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:57:55 crc kubenswrapper[4856]: I1122 07:57:55.710193 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:57:55 crc kubenswrapper[4856]: E1122 07:57:55.710988 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:58:09 crc kubenswrapper[4856]: I1122 07:58:09.711289 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:58:09 crc kubenswrapper[4856]: E1122 07:58:09.712569 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:58:19 crc kubenswrapper[4856]: I1122 07:58:19.948870 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-svrmq"] Nov 22 07:58:19 crc kubenswrapper[4856]: E1122 07:58:19.950560 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="extract-utilities" Nov 22 07:58:19 crc kubenswrapper[4856]: I1122 07:58:19.950594 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="extract-utilities" Nov 22 07:58:19 crc kubenswrapper[4856]: E1122 07:58:19.950668 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="registry-server" Nov 22 07:58:19 crc kubenswrapper[4856]: I1122 07:58:19.950685 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="registry-server" Nov 22 07:58:19 crc kubenswrapper[4856]: E1122 07:58:19.950714 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="extract-content" Nov 22 07:58:19 crc kubenswrapper[4856]: I1122 07:58:19.950732 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="extract-content" Nov 22 07:58:19 crc kubenswrapper[4856]: I1122 07:58:19.951067 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1c8611-50eb-4e4c-adc4-58cd3dbe011f" containerName="registry-server" Nov 22 07:58:19 crc kubenswrapper[4856]: I1122 07:58:19.959376 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:19 crc kubenswrapper[4856]: I1122 07:58:19.970756 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svrmq"] Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.018832 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnf7d\" (UniqueName: \"kubernetes.io/projected/9700f515-9be7-4004-bb5e-e9382f0fcf2f-kube-api-access-mnf7d\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.019534 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-catalog-content\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.020326 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-utilities\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.121937 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-utilities\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.122221 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnf7d\" (UniqueName: \"kubernetes.io/projected/9700f515-9be7-4004-bb5e-e9382f0fcf2f-kube-api-access-mnf7d\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.122258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-catalog-content\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.123431 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-utilities\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.123480 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-catalog-content\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.155381 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mnf7d\" (UniqueName: \"kubernetes.io/projected/9700f515-9be7-4004-bb5e-e9382f0fcf2f-kube-api-access-mnf7d\") pod \"redhat-operators-svrmq\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.334397 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.610329 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svrmq"] Nov 22 07:58:20 crc kubenswrapper[4856]: I1122 07:58:20.799386 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svrmq" event={"ID":"9700f515-9be7-4004-bb5e-e9382f0fcf2f","Type":"ContainerStarted","Data":"a9327c456fd19d25d0ba4a91564d6e251d1b0b94928f18f8813afda3f6a19d7b"} Nov 22 07:58:21 crc kubenswrapper[4856]: I1122 07:58:21.810867 4856 generic.go:334] "Generic (PLEG): container finished" podID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerID="6cc600ceebc10b095839251401733864ca25578cdd8351fb38ff60ebb6ff7e91" exitCode=0 Nov 22 07:58:21 crc kubenswrapper[4856]: I1122 07:58:21.810960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svrmq" event={"ID":"9700f515-9be7-4004-bb5e-e9382f0fcf2f","Type":"ContainerDied","Data":"6cc600ceebc10b095839251401733864ca25578cdd8351fb38ff60ebb6ff7e91"} Nov 22 07:58:21 crc kubenswrapper[4856]: I1122 07:58:21.823607 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 07:58:23 crc kubenswrapper[4856]: I1122 07:58:23.709712 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:58:23 crc kubenswrapper[4856]: E1122 07:58:23.710295 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:58:23 crc kubenswrapper[4856]: I1122 07:58:23.825962 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svrmq" event={"ID":"9700f515-9be7-4004-bb5e-e9382f0fcf2f","Type":"ContainerStarted","Data":"da622c911bf97c9997aee37e5e2186d52afbe229780f427dccac5e87814d7114"} Nov 22 07:58:24 crc kubenswrapper[4856]: I1122 07:58:24.834021 4856 generic.go:334] "Generic (PLEG): container finished" podID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerID="da622c911bf97c9997aee37e5e2186d52afbe229780f427dccac5e87814d7114" exitCode=0 Nov 22 07:58:24 crc kubenswrapper[4856]: I1122 07:58:24.834071 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svrmq" event={"ID":"9700f515-9be7-4004-bb5e-e9382f0fcf2f","Type":"ContainerDied","Data":"da622c911bf97c9997aee37e5e2186d52afbe229780f427dccac5e87814d7114"} Nov 22 07:58:28 crc kubenswrapper[4856]: I1122 07:58:28.876466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svrmq" 
event={"ID":"9700f515-9be7-4004-bb5e-e9382f0fcf2f","Type":"ContainerStarted","Data":"5f75f0cd7104a3eab768fd04aac607c3b1b3377ff5bf1594628176aada53349e"} Nov 22 07:58:28 crc kubenswrapper[4856]: I1122 07:58:28.901799 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-svrmq" podStartSLOduration=4.183596069 podStartE2EDuration="9.901784757s" podCreationTimestamp="2025-11-22 07:58:19 +0000 UTC" firstStartedPulling="2025-11-22 07:58:21.823339841 +0000 UTC m=+3344.236733099" lastFinishedPulling="2025-11-22 07:58:27.541528529 +0000 UTC m=+3349.954921787" observedRunningTime="2025-11-22 07:58:28.900947815 +0000 UTC m=+3351.314341073" watchObservedRunningTime="2025-11-22 07:58:28.901784757 +0000 UTC m=+3351.315178005" Nov 22 07:58:30 crc kubenswrapper[4856]: I1122 07:58:30.334323 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:30 crc kubenswrapper[4856]: I1122 07:58:30.335572 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:31 crc kubenswrapper[4856]: I1122 07:58:31.375606 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-svrmq" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="registry-server" probeResult="failure" output=< Nov 22 07:58:31 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 07:58:31 crc kubenswrapper[4856]: > Nov 22 07:58:38 crc kubenswrapper[4856]: I1122 07:58:38.715914 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:58:38 crc kubenswrapper[4856]: E1122 07:58:38.718543 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:58:40 crc kubenswrapper[4856]: I1122 07:58:40.393242 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:40 crc kubenswrapper[4856]: I1122 07:58:40.450055 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:40 crc kubenswrapper[4856]: I1122 07:58:40.631666 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svrmq"] Nov 22 07:58:42 crc kubenswrapper[4856]: I1122 07:58:41.999587 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-svrmq" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="registry-server" containerID="cri-o://5f75f0cd7104a3eab768fd04aac607c3b1b3377ff5bf1594628176aada53349e" gracePeriod=2 Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.036303 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xs9vt"] Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.038364 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.051826 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs9vt"] Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.220449 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-utilities\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.220968 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5j7\" (UniqueName: \"kubernetes.io/projected/cc8d2b0d-0927-4606-97ac-f19afd39b157-kube-api-access-pj5j7\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.221035 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-catalog-content\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.322370 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-catalog-content\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.322447 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-utilities\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.322535 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj5j7\" (UniqueName: \"kubernetes.io/projected/cc8d2b0d-0927-4606-97ac-f19afd39b157-kube-api-access-pj5j7\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.956236 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-catalog-content\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:43 crc kubenswrapper[4856]: I1122 07:58:43.957216 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj5j7\" (UniqueName: \"kubernetes.io/projected/cc8d2b0d-0927-4606-97ac-f19afd39b157-kube-api-access-pj5j7\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:44 crc kubenswrapper[4856]: I1122 07:58:44.674060 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-utilities\") pod \"community-operators-xs9vt\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:44 crc kubenswrapper[4856]: I1122 07:58:44.825325 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:58:45 crc kubenswrapper[4856]: I1122 07:58:45.029653 4856 generic.go:334] "Generic (PLEG): container finished" podID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerID="5f75f0cd7104a3eab768fd04aac607c3b1b3377ff5bf1594628176aada53349e" exitCode=0 Nov 22 07:58:45 crc kubenswrapper[4856]: I1122 07:58:45.029713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svrmq" event={"ID":"9700f515-9be7-4004-bb5e-e9382f0fcf2f","Type":"ContainerDied","Data":"5f75f0cd7104a3eab768fd04aac607c3b1b3377ff5bf1594628176aada53349e"} Nov 22 07:58:45 crc kubenswrapper[4856]: I1122 07:58:45.955063 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.041946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svrmq" event={"ID":"9700f515-9be7-4004-bb5e-e9382f0fcf2f","Type":"ContainerDied","Data":"a9327c456fd19d25d0ba4a91564d6e251d1b0b94928f18f8813afda3f6a19d7b"} Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.042000 4856 scope.go:117] "RemoveContainer" containerID="5f75f0cd7104a3eab768fd04aac607c3b1b3377ff5bf1594628176aada53349e" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.042050 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svrmq" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.069099 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-utilities\") pod \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.069300 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-catalog-content\") pod \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.069366 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnf7d\" (UniqueName: \"kubernetes.io/projected/9700f515-9be7-4004-bb5e-e9382f0fcf2f-kube-api-access-mnf7d\") pod \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\" (UID: \"9700f515-9be7-4004-bb5e-e9382f0fcf2f\") " Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.070047 4856 scope.go:117] "RemoveContainer" containerID="da622c911bf97c9997aee37e5e2186d52afbe229780f427dccac5e87814d7114" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.071139 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-utilities" (OuterVolumeSpecName: "utilities") pod "9700f515-9be7-4004-bb5e-e9382f0fcf2f" (UID: "9700f515-9be7-4004-bb5e-e9382f0fcf2f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.079637 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9700f515-9be7-4004-bb5e-e9382f0fcf2f-kube-api-access-mnf7d" (OuterVolumeSpecName: "kube-api-access-mnf7d") pod "9700f515-9be7-4004-bb5e-e9382f0fcf2f" (UID: "9700f515-9be7-4004-bb5e-e9382f0fcf2f"). InnerVolumeSpecName "kube-api-access-mnf7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.094907 4856 scope.go:117] "RemoveContainer" containerID="6cc600ceebc10b095839251401733864ca25578cdd8351fb38ff60ebb6ff7e91" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.128668 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs9vt"] Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.172621 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnf7d\" (UniqueName: \"kubernetes.io/projected/9700f515-9be7-4004-bb5e-e9382f0fcf2f-kube-api-access-mnf7d\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:46 crc kubenswrapper[4856]: I1122 07:58:46.172668 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:47 crc kubenswrapper[4856]: I1122 07:58:47.054563 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs9vt" event={"ID":"cc8d2b0d-0927-4606-97ac-f19afd39b157","Type":"ContainerStarted","Data":"91fa2bcc0bcdbbcff40ff813a332eaa7b9d704e6e5de46877b3a120b8e1eb4e0"} Nov 22 07:58:48 crc kubenswrapper[4856]: I1122 07:58:48.020235 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9700f515-9be7-4004-bb5e-e9382f0fcf2f" (UID: "9700f515-9be7-4004-bb5e-e9382f0fcf2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:58:48 crc kubenswrapper[4856]: I1122 07:58:48.069350 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs9vt" event={"ID":"cc8d2b0d-0927-4606-97ac-f19afd39b157","Type":"ContainerStarted","Data":"2b6921dd9d78e62de4ffe84e101ea9cc95a81549814a48920afaec20a1a92562"} Nov 22 07:58:48 crc kubenswrapper[4856]: I1122 07:58:48.106444 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9700f515-9be7-4004-bb5e-e9382f0fcf2f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:58:48 crc kubenswrapper[4856]: I1122 07:58:48.185332 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svrmq"] Nov 22 07:58:48 crc kubenswrapper[4856]: I1122 07:58:48.193122 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-svrmq"] Nov 22 07:58:48 crc kubenswrapper[4856]: I1122 07:58:48.719093 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" path="/var/lib/kubelet/pods/9700f515-9be7-4004-bb5e-e9382f0fcf2f/volumes" Nov 22 07:58:49 crc kubenswrapper[4856]: I1122 07:58:49.083412 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerID="2b6921dd9d78e62de4ffe84e101ea9cc95a81549814a48920afaec20a1a92562" exitCode=0 Nov 22 07:58:49 crc kubenswrapper[4856]: I1122 07:58:49.083590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs9vt" event={"ID":"cc8d2b0d-0927-4606-97ac-f19afd39b157","Type":"ContainerDied","Data":"2b6921dd9d78e62de4ffe84e101ea9cc95a81549814a48920afaec20a1a92562"} Nov 22 07:58:53 crc kubenswrapper[4856]: I1122 07:58:53.128875 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerID="95636a1a0e4c9eba2629559475abbf631223daaca3dfc925f2104fa8ec142212" exitCode=0 Nov 22 07:58:53 crc kubenswrapper[4856]: I1122 07:58:53.129010 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs9vt" event={"ID":"cc8d2b0d-0927-4606-97ac-f19afd39b157","Type":"ContainerDied","Data":"95636a1a0e4c9eba2629559475abbf631223daaca3dfc925f2104fa8ec142212"} Nov 22 07:58:53 crc kubenswrapper[4856]: I1122 07:58:53.709469 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:58:53 crc kubenswrapper[4856]: E1122 07:58:53.709860 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:58:56 crc kubenswrapper[4856]: I1122 07:58:56.173695 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs9vt" event={"ID":"cc8d2b0d-0927-4606-97ac-f19afd39b157","Type":"ContainerStarted","Data":"bd74bb1a1e902128b9fbbab39028c8562d8cf31d0a2e183e4a866edbafb09c70"} Nov 22 07:58:56 crc kubenswrapper[4856]: I1122 07:58:56.211616 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-xs9vt" podStartSLOduration=6.9134047039999995 podStartE2EDuration="13.211590753s" podCreationTimestamp="2025-11-22 07:58:43 +0000 UTC" firstStartedPulling="2025-11-22 07:58:49.087369284 +0000 UTC m=+3371.500762572" lastFinishedPulling="2025-11-22 07:58:55.385555353 +0000 UTC m=+3377.798948621" observedRunningTime="2025-11-22 07:58:56.201436979 +0000 UTC m=+3378.614830237" watchObservedRunningTime="2025-11-22 07:58:56.211590753 +0000 UTC m=+3378.624984011" Nov 22 07:59:04 crc kubenswrapper[4856]: I1122 07:59:04.709676 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:59:04 crc kubenswrapper[4856]: E1122 07:59:04.710723 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:59:04 crc kubenswrapper[4856]: I1122 07:59:04.826424 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:59:04 crc kubenswrapper[4856]: I1122 07:59:04.826753 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:59:04 crc kubenswrapper[4856]: I1122 07:59:04.878069 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:59:05 crc kubenswrapper[4856]: I1122 07:59:05.321237 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:59:05 crc kubenswrapper[4856]: I1122 07:59:05.376837 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xs9vt"] Nov 22 07:59:07 crc kubenswrapper[4856]: I1122 07:59:07.270934 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xs9vt" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="registry-server" containerID="cri-o://bd74bb1a1e902128b9fbbab39028c8562d8cf31d0a2e183e4a866edbafb09c70" gracePeriod=2 Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.305401 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerID="bd74bb1a1e902128b9fbbab39028c8562d8cf31d0a2e183e4a866edbafb09c70" exitCode=0 Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.305914 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs9vt" event={"ID":"cc8d2b0d-0927-4606-97ac-f19afd39b157","Type":"ContainerDied","Data":"bd74bb1a1e902128b9fbbab39028c8562d8cf31d0a2e183e4a866edbafb09c70"} Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.380948 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.568322 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-catalog-content\") pod \"cc8d2b0d-0927-4606-97ac-f19afd39b157\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.568387 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-utilities\") pod \"cc8d2b0d-0927-4606-97ac-f19afd39b157\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.568442 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj5j7\" (UniqueName: \"kubernetes.io/projected/cc8d2b0d-0927-4606-97ac-f19afd39b157-kube-api-access-pj5j7\") pod \"cc8d2b0d-0927-4606-97ac-f19afd39b157\" (UID: \"cc8d2b0d-0927-4606-97ac-f19afd39b157\") " Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.569754 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-utilities" (OuterVolumeSpecName: "utilities") pod "cc8d2b0d-0927-4606-97ac-f19afd39b157" (UID: "cc8d2b0d-0927-4606-97ac-f19afd39b157"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.574368 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc8d2b0d-0927-4606-97ac-f19afd39b157-kube-api-access-pj5j7" (OuterVolumeSpecName: "kube-api-access-pj5j7") pod "cc8d2b0d-0927-4606-97ac-f19afd39b157" (UID: "cc8d2b0d-0927-4606-97ac-f19afd39b157"). InnerVolumeSpecName "kube-api-access-pj5j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.618679 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc8d2b0d-0927-4606-97ac-f19afd39b157" (UID: "cc8d2b0d-0927-4606-97ac-f19afd39b157"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.669809 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.669840 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc8d2b0d-0927-4606-97ac-f19afd39b157-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:08 crc kubenswrapper[4856]: I1122 07:59:08.669850 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj5j7\" (UniqueName: \"kubernetes.io/projected/cc8d2b0d-0927-4606-97ac-f19afd39b157-kube-api-access-pj5j7\") on node \"crc\" DevicePath \"\"" Nov 22 07:59:09 crc kubenswrapper[4856]: I1122 07:59:09.318886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs9vt" event={"ID":"cc8d2b0d-0927-4606-97ac-f19afd39b157","Type":"ContainerDied","Data":"91fa2bcc0bcdbbcff40ff813a332eaa7b9d704e6e5de46877b3a120b8e1eb4e0"} Nov 22 07:59:09 crc kubenswrapper[4856]: I1122 07:59:09.318954 4856 scope.go:117] "RemoveContainer" containerID="bd74bb1a1e902128b9fbbab39028c8562d8cf31d0a2e183e4a866edbafb09c70" Nov 22 07:59:09 crc kubenswrapper[4856]: I1122 07:59:09.320286 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs9vt" Nov 22 07:59:09 crc kubenswrapper[4856]: I1122 07:59:09.343035 4856 scope.go:117] "RemoveContainer" containerID="95636a1a0e4c9eba2629559475abbf631223daaca3dfc925f2104fa8ec142212" Nov 22 07:59:09 crc kubenswrapper[4856]: I1122 07:59:09.347897 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xs9vt"] Nov 22 07:59:09 crc kubenswrapper[4856]: I1122 07:59:09.353852 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xs9vt"] Nov 22 07:59:09 crc kubenswrapper[4856]: I1122 07:59:09.365689 4856 scope.go:117] "RemoveContainer" containerID="2b6921dd9d78e62de4ffe84e101ea9cc95a81549814a48920afaec20a1a92562" Nov 22 07:59:10 crc kubenswrapper[4856]: I1122 07:59:10.717929 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" path="/var/lib/kubelet/pods/cc8d2b0d-0927-4606-97ac-f19afd39b157/volumes" Nov 22 07:59:15 crc kubenswrapper[4856]: I1122 07:59:15.710367 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:59:15 crc kubenswrapper[4856]: E1122 07:59:15.711084 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:59:29 crc kubenswrapper[4856]: I1122 07:59:29.710502 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:59:29 crc kubenswrapper[4856]: E1122 07:59:29.711756 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 07:59:42 crc kubenswrapper[4856]: I1122 07:59:42.712891 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 07:59:43 crc kubenswrapper[4856]: I1122 07:59:43.592314 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"cc0742e64ac1595b971c241e6fe2ab4963b365d6b87410e4e2179a202de989f7"} Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.191945 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6"] Nov 22 08:00:00 crc kubenswrapper[4856]: E1122 08:00:00.192994 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193012 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4856]: E1122 08:00:00.193029 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193035 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4856]: E1122 08:00:00.193054 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193067 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="extract-content" Nov 22 08:00:00 crc kubenswrapper[4856]: E1122 08:00:00.193082 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193089 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4856]: E1122 08:00:00.193108 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193120 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4856]: E1122 08:00:00.193139 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193147 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="extract-utilities" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193338 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cc8d2b0d-0927-4606-97ac-f19afd39b157" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.193360 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9700f515-9be7-4004-bb5e-e9382f0fcf2f" containerName="registry-server" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.194011 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.196883 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.196886 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.207699 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6"] Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.289502 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c44a36f1-1a04-4522-a338-6161608fbdc4-secret-volume\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.290008 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c44a36f1-1a04-4522-a338-6161608fbdc4-config-volume\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.290087 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8d8r\" (UniqueName: \"kubernetes.io/projected/c44a36f1-1a04-4522-a338-6161608fbdc4-kube-api-access-h8d8r\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.391315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c44a36f1-1a04-4522-a338-6161608fbdc4-secret-volume\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.391388 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c44a36f1-1a04-4522-a338-6161608fbdc4-config-volume\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.391413 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8d8r\" (UniqueName: \"kubernetes.io/projected/c44a36f1-1a04-4522-a338-6161608fbdc4-kube-api-access-h8d8r\") pod 
\"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.393633 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c44a36f1-1a04-4522-a338-6161608fbdc4-config-volume\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.408032 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c44a36f1-1a04-4522-a338-6161608fbdc4-secret-volume\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.410479 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8d8r\" (UniqueName: \"kubernetes.io/projected/c44a36f1-1a04-4522-a338-6161608fbdc4-kube-api-access-h8d8r\") pod \"collect-profiles-29396640-rfxl6\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.528179 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:00 crc kubenswrapper[4856]: I1122 08:00:00.963803 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6"] Nov 22 08:00:00 crc kubenswrapper[4856]: W1122 08:00:00.966254 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc44a36f1_1a04_4522_a338_6161608fbdc4.slice/crio-0892d299f0bd5ff76f30b694b3f4bcc53bf9543607ea06a50053a214283df191 WatchSource:0}: Error finding container 0892d299f0bd5ff76f30b694b3f4bcc53bf9543607ea06a50053a214283df191: Status 404 returned error can't find the container with id 0892d299f0bd5ff76f30b694b3f4bcc53bf9543607ea06a50053a214283df191 Nov 22 08:00:01 crc kubenswrapper[4856]: I1122 08:00:01.750227 4856 generic.go:334] "Generic (PLEG): container finished" podID="c44a36f1-1a04-4522-a338-6161608fbdc4" containerID="443a82891cbcb452272df1904faa8139afa45b83db12b19d18783b444c183faa" exitCode=0 Nov 22 08:00:01 crc kubenswrapper[4856]: I1122 08:00:01.750345 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" event={"ID":"c44a36f1-1a04-4522-a338-6161608fbdc4","Type":"ContainerDied","Data":"443a82891cbcb452272df1904faa8139afa45b83db12b19d18783b444c183faa"} Nov 22 08:00:01 crc kubenswrapper[4856]: I1122 08:00:01.750610 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" event={"ID":"c44a36f1-1a04-4522-a338-6161608fbdc4","Type":"ContainerStarted","Data":"0892d299f0bd5ff76f30b694b3f4bcc53bf9543607ea06a50053a214283df191"} Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.046415 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.134384 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c44a36f1-1a04-4522-a338-6161608fbdc4-config-volume\") pod \"c44a36f1-1a04-4522-a338-6161608fbdc4\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.134572 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c44a36f1-1a04-4522-a338-6161608fbdc4-secret-volume\") pod \"c44a36f1-1a04-4522-a338-6161608fbdc4\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.134693 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8d8r\" (UniqueName: \"kubernetes.io/projected/c44a36f1-1a04-4522-a338-6161608fbdc4-kube-api-access-h8d8r\") pod \"c44a36f1-1a04-4522-a338-6161608fbdc4\" (UID: \"c44a36f1-1a04-4522-a338-6161608fbdc4\") " Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.135798 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44a36f1-1a04-4522-a338-6161608fbdc4-config-volume" (OuterVolumeSpecName: "config-volume") pod "c44a36f1-1a04-4522-a338-6161608fbdc4" (UID: "c44a36f1-1a04-4522-a338-6161608fbdc4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.140114 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c44a36f1-1a04-4522-a338-6161608fbdc4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c44a36f1-1a04-4522-a338-6161608fbdc4" (UID: "c44a36f1-1a04-4522-a338-6161608fbdc4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.140184 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44a36f1-1a04-4522-a338-6161608fbdc4-kube-api-access-h8d8r" (OuterVolumeSpecName: "kube-api-access-h8d8r") pod "c44a36f1-1a04-4522-a338-6161608fbdc4" (UID: "c44a36f1-1a04-4522-a338-6161608fbdc4"). InnerVolumeSpecName "kube-api-access-h8d8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.236461 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c44a36f1-1a04-4522-a338-6161608fbdc4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.236507 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8d8r\" (UniqueName: \"kubernetes.io/projected/c44a36f1-1a04-4522-a338-6161608fbdc4-kube-api-access-h8d8r\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.236590 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c44a36f1-1a04-4522-a338-6161608fbdc4-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.767886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" event={"ID":"c44a36f1-1a04-4522-a338-6161608fbdc4","Type":"ContainerDied","Data":"0892d299f0bd5ff76f30b694b3f4bcc53bf9543607ea06a50053a214283df191"} Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.768317 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0892d299f0bd5ff76f30b694b3f4bcc53bf9543607ea06a50053a214283df191" Nov 22 08:00:03 crc kubenswrapper[4856]: I1122 08:00:03.767997 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6" Nov 22 08:00:04 crc kubenswrapper[4856]: I1122 08:00:04.133655 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77"] Nov 22 08:00:04 crc kubenswrapper[4856]: I1122 08:00:04.144759 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396595-bcg77"] Nov 22 08:00:04 crc kubenswrapper[4856]: I1122 08:00:04.721485 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="424fa6a2-20eb-46ab-b5df-b67dc5dd211a" path="/var/lib/kubelet/pods/424fa6a2-20eb-46ab-b5df-b67dc5dd211a/volumes" Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.803309 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgjw"] Nov 22 08:00:12 crc kubenswrapper[4856]: E1122 08:00:12.804759 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44a36f1-1a04-4522-a338-6161608fbdc4" containerName="collect-profiles" Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.804779 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44a36f1-1a04-4522-a338-6161608fbdc4" containerName="collect-profiles" Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.804990 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44a36f1-1a04-4522-a338-6161608fbdc4" containerName="collect-profiles" Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.809712 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.816789 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgjw"] Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.905047 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-catalog-content\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.905101 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46g9g\" (UniqueName: \"kubernetes.io/projected/45d08f07-64cd-4109-b672-f2f57e2c63f4-kube-api-access-46g9g\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:12 crc kubenswrapper[4856]: I1122 08:00:12.905218 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-utilities\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:13 crc kubenswrapper[4856]: I1122 08:00:13.007370 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46g9g\" (UniqueName: \"kubernetes.io/projected/45d08f07-64cd-4109-b672-f2f57e2c63f4-kube-api-access-46g9g\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:13 crc kubenswrapper[4856]: I1122 08:00:13.007581 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-utilities\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:13 crc kubenswrapper[4856]: I1122 08:00:13.007611 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-catalog-content\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:13 crc kubenswrapper[4856]: I1122 08:00:13.008111 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-catalog-content\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:13 crc kubenswrapper[4856]: I1122 08:00:13.008174 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-utilities\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:13 crc kubenswrapper[4856]: I1122 08:00:13.028893 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-46g9g\" (UniqueName: \"kubernetes.io/projected/45d08f07-64cd-4109-b672-f2f57e2c63f4-kube-api-access-46g9g\") pod \"redhat-marketplace-rdgjw\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:13 crc kubenswrapper[4856]: I1122 08:00:13.191895 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:14 crc kubenswrapper[4856]: I1122 08:00:14.023677 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgjw"] Nov 22 08:00:14 crc kubenswrapper[4856]: I1122 08:00:14.894751 4856 generic.go:334] "Generic (PLEG): container finished" podID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerID="42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670" exitCode=0 Nov 22 08:00:14 crc kubenswrapper[4856]: I1122 08:00:14.894848 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgjw" event={"ID":"45d08f07-64cd-4109-b672-f2f57e2c63f4","Type":"ContainerDied","Data":"42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670"} Nov 22 08:00:14 crc kubenswrapper[4856]: I1122 08:00:14.895118 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgjw" event={"ID":"45d08f07-64cd-4109-b672-f2f57e2c63f4","Type":"ContainerStarted","Data":"f2b69dc39c45b038fab84b9ef41b6ce84e8f812203a6728a5675fd15267d255f"} Nov 22 08:00:16 crc kubenswrapper[4856]: I1122 08:00:16.915269 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgjw" event={"ID":"45d08f07-64cd-4109-b672-f2f57e2c63f4","Type":"ContainerStarted","Data":"d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5"} Nov 22 08:00:17 crc kubenswrapper[4856]: I1122 08:00:17.925406 4856 generic.go:334] "Generic (PLEG): container finished" podID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerID="d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5" exitCode=0 Nov 22 08:00:17 crc kubenswrapper[4856]: I1122 08:00:17.925480 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgjw" event={"ID":"45d08f07-64cd-4109-b672-f2f57e2c63f4","Type":"ContainerDied","Data":"d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5"} Nov 22 08:00:18 crc kubenswrapper[4856]: I1122 08:00:18.935994 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgjw" event={"ID":"45d08f07-64cd-4109-b672-f2f57e2c63f4","Type":"ContainerStarted","Data":"b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893"} Nov 22 08:00:19 crc kubenswrapper[4856]: I1122 08:00:19.965086 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rdgjw" podStartSLOduration=4.40977455 podStartE2EDuration="7.965063666s" podCreationTimestamp="2025-11-22 08:00:12 +0000 UTC" firstStartedPulling="2025-11-22 08:00:14.897018962 +0000 UTC m=+3457.310412220" lastFinishedPulling="2025-11-22 08:00:18.452308078 +0000 UTC m=+3460.865701336" observedRunningTime="2025-11-22 08:00:19.959549368 +0000 UTC m=+3462.372942646" watchObservedRunningTime="2025-11-22 08:00:19.965063666 +0000 UTC m=+3462.378456914" Nov 22 08:00:21 crc kubenswrapper[4856]: I1122 08:00:21.148433 4856 scope.go:117] "RemoveContainer" 
containerID="dec5788054cab634606129d0e0d30843dc7cc305e4d705f334185cd54a09a44d" Nov 22 08:00:23 crc kubenswrapper[4856]: I1122 08:00:23.192492 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:23 crc kubenswrapper[4856]: I1122 08:00:23.193613 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:23 crc kubenswrapper[4856]: I1122 08:00:23.257161 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:24 crc kubenswrapper[4856]: I1122 08:00:24.017215 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:24 crc kubenswrapper[4856]: I1122 08:00:24.075445 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgjw"] Nov 22 08:00:25 crc kubenswrapper[4856]: I1122 08:00:25.988327 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdgjw" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="registry-server" containerID="cri-o://b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893" gracePeriod=2 Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.606952 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.781256 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-utilities\") pod \"45d08f07-64cd-4109-b672-f2f57e2c63f4\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.781387 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46g9g\" (UniqueName: \"kubernetes.io/projected/45d08f07-64cd-4109-b672-f2f57e2c63f4-kube-api-access-46g9g\") pod \"45d08f07-64cd-4109-b672-f2f57e2c63f4\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.781493 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-catalog-content\") pod \"45d08f07-64cd-4109-b672-f2f57e2c63f4\" (UID: \"45d08f07-64cd-4109-b672-f2f57e2c63f4\") " Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.782218 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-utilities" (OuterVolumeSpecName: "utilities") pod "45d08f07-64cd-4109-b672-f2f57e2c63f4" (UID: "45d08f07-64cd-4109-b672-f2f57e2c63f4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.783885 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.794356 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d08f07-64cd-4109-b672-f2f57e2c63f4-kube-api-access-46g9g" (OuterVolumeSpecName: "kube-api-access-46g9g") pod "45d08f07-64cd-4109-b672-f2f57e2c63f4" (UID: "45d08f07-64cd-4109-b672-f2f57e2c63f4"). InnerVolumeSpecName "kube-api-access-46g9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.802275 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45d08f07-64cd-4109-b672-f2f57e2c63f4" (UID: "45d08f07-64cd-4109-b672-f2f57e2c63f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.886985 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46g9g\" (UniqueName: \"kubernetes.io/projected/45d08f07-64cd-4109-b672-f2f57e2c63f4-kube-api-access-46g9g\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:26 crc kubenswrapper[4856]: I1122 08:00:26.887019 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d08f07-64cd-4109-b672-f2f57e2c63f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.000928 4856 generic.go:334] "Generic (PLEG): container finished" podID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerID="b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893" exitCode=0 Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.001002 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgjw" event={"ID":"45d08f07-64cd-4109-b672-f2f57e2c63f4","Type":"ContainerDied","Data":"b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893"} Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.001483 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdgjw" event={"ID":"45d08f07-64cd-4109-b672-f2f57e2c63f4","Type":"ContainerDied","Data":"f2b69dc39c45b038fab84b9ef41b6ce84e8f812203a6728a5675fd15267d255f"} Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.001529 4856 scope.go:117] "RemoveContainer" containerID="b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.001022 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdgjw" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.025205 4856 scope.go:117] "RemoveContainer" containerID="d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.043858 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgjw"] Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.052659 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdgjw"] Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.062769 4856 scope.go:117] "RemoveContainer" containerID="42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.081325 4856 scope.go:117] "RemoveContainer" containerID="b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893" Nov 22 08:00:27 crc kubenswrapper[4856]: E1122 08:00:27.081948 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893\": container with ID starting with b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893 not found: ID does not exist" containerID="b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.082003 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893"} err="failed to get container status \"b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893\": rpc error: code = NotFound desc = could not find container \"b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893\": container with ID starting with b488465fa3afaf0633b5f1f07f870beba5d894df5a6308462d065f97ab5bb893 not found: ID does not exist" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.082042 4856 scope.go:117] "RemoveContainer" containerID="d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5" Nov 22 08:00:27 crc kubenswrapper[4856]: E1122 08:00:27.082569 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5\": container with ID starting with d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5 not found: ID does not exist" containerID="d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.082616 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5"} err="failed to get container status \"d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5\": rpc error: code = NotFound desc = could not find container \"d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5\": container with ID starting with d8118c6d73c5e804a7c221ec2c5d2ba8b8240973107ebd44af2ba6182c14a8b5 not found: ID does not exist" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.082650 4856 scope.go:117] "RemoveContainer" containerID="42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670" Nov 22 08:00:27 crc kubenswrapper[4856]: E1122 08:00:27.083146 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670\": container with ID starting with 42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670 not found: ID does not exist" containerID="42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670" Nov 22 08:00:27 crc kubenswrapper[4856]: I1122 08:00:27.083305 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670"} err="failed to get container status \"42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670\": rpc error: code = NotFound desc = could not find container \"42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670\": container with ID starting with 42e7f2a1ad2e675570f837bce5e908d40978c6724f2d5f9607bd30c5db0ed670 not found: ID does not exist" Nov 22 08:00:28 crc kubenswrapper[4856]: I1122 08:00:28.720656 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" path="/var/lib/kubelet/pods/45d08f07-64cd-4109-b672-f2f57e2c63f4/volumes" Nov 22 08:01:59 crc kubenswrapper[4856]: I1122 08:01:59.754756 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:01:59 crc kubenswrapper[4856]: I1122 08:01:59.755449 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:02:29 crc kubenswrapper[4856]: I1122 08:02:29.754765 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:02:29 crc kubenswrapper[4856]: I1122 08:02:29.755788 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:02:59 crc kubenswrapper[4856]: I1122 08:02:59.754211 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:02:59 crc kubenswrapper[4856]: I1122 08:02:59.754764 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:02:59 crc kubenswrapper[4856]: I1122 08:02:59.754810 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:02:59 crc kubenswrapper[4856]: I1122 08:02:59.755526 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc0742e64ac1595b971c241e6fe2ab4963b365d6b87410e4e2179a202de989f7"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:02:59 crc kubenswrapper[4856]: I1122 08:02:59.755593 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://cc0742e64ac1595b971c241e6fe2ab4963b365d6b87410e4e2179a202de989f7" gracePeriod=600 Nov 22 08:03:00 crc kubenswrapper[4856]: I1122 08:03:00.184886 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="cc0742e64ac1595b971c241e6fe2ab4963b365d6b87410e4e2179a202de989f7" exitCode=0 Nov 22 08:03:00 crc kubenswrapper[4856]: I1122 08:03:00.184961 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"cc0742e64ac1595b971c241e6fe2ab4963b365d6b87410e4e2179a202de989f7"} Nov 22 08:03:00 crc kubenswrapper[4856]: I1122 08:03:00.185470 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3"} Nov 22 08:03:00 crc kubenswrapper[4856]: I1122 08:03:00.185528 4856 scope.go:117] "RemoveContainer" containerID="aaf78d52e83597fdf9b5dafec122d2d466d7905fcd722576f380ae3b995699e3" Nov 22 08:04:06 crc kubenswrapper[4856]: I1122 08:04:06.485906 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-k5p5q" podUID="53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0" containerName="registry-server" probeResult="failure" output=< Nov 22 08:04:06 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 08:04:06 crc kubenswrapper[4856]: > Nov 22 08:04:06 crc kubenswrapper[4856]: I1122 08:04:06.906796 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-k5p5q" podUID="53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0" containerName="registry-server" probeResult="failure" output=< Nov 22 08:04:06 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 08:04:06 crc kubenswrapper[4856]: > Nov 22 08:05:29 crc kubenswrapper[4856]: I1122 08:05:29.754354 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:05:29 crc kubenswrapper[4856]: I1122 08:05:29.755407 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:05:59 crc kubenswrapper[4856]: I1122 08:05:59.753975 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:05:59 crc kubenswrapper[4856]: I1122 08:05:59.754662 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:06:29 crc kubenswrapper[4856]: I1122 08:06:29.755084 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:06:29 crc kubenswrapper[4856]: I1122 08:06:29.755664 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:06:29 crc kubenswrapper[4856]: I1122 08:06:29.755716 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:06:29 crc kubenswrapper[4856]: I1122 08:06:29.756459 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:06:29 crc kubenswrapper[4856]: I1122 08:06:29.756528 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" gracePeriod=600 Nov 22 08:06:30 crc kubenswrapper[4856]: I1122 08:06:30.016056 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" exitCode=0 Nov 22 08:06:30 crc kubenswrapper[4856]: I1122 08:06:30.016106 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3"} Nov 22 08:06:30 crc kubenswrapper[4856]: I1122 08:06:30.016152 4856 scope.go:117] "RemoveContainer" containerID="cc0742e64ac1595b971c241e6fe2ab4963b365d6b87410e4e2179a202de989f7" Nov 22 08:06:30 crc kubenswrapper[4856]: E1122 08:06:30.383339 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:06:31 crc kubenswrapper[4856]: I1122 08:06:31.025685 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:06:31 crc kubenswrapper[4856]: E1122 08:06:31.025902 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:06:46 crc kubenswrapper[4856]: I1122 08:06:46.711092 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:06:46 crc kubenswrapper[4856]: E1122 08:06:46.712386 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:06:57 crc kubenswrapper[4856]: I1122 08:06:57.710125 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:06:57 crc kubenswrapper[4856]: E1122 08:06:57.710928 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:07:10 crc kubenswrapper[4856]: I1122 08:07:10.724388 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:07:10 crc kubenswrapper[4856]: E1122 08:07:10.728944 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:07:22 crc kubenswrapper[4856]: I1122 08:07:22.710456 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:07:22 crc kubenswrapper[4856]: E1122 08:07:22.711319 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:07:37 crc kubenswrapper[4856]: I1122 08:07:37.709694 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:07:37 crc kubenswrapper[4856]: E1122 08:07:37.710444 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:07:52 crc kubenswrapper[4856]: I1122 08:07:52.709956 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:07:52 crc kubenswrapper[4856]: E1122 08:07:52.710861 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:08:07 crc kubenswrapper[4856]: I1122 08:08:07.710157 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:08:07 crc kubenswrapper[4856]: E1122 08:08:07.711006 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:08:20 crc kubenswrapper[4856]: I1122 08:08:20.709412 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:08:20 crc kubenswrapper[4856]: E1122 08:08:20.710170 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:08:33 crc kubenswrapper[4856]: I1122 08:08:33.710395 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:08:33 crc kubenswrapper[4856]: E1122 08:08:33.711721 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:08:46 crc kubenswrapper[4856]: I1122 08:08:46.709951 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:08:46 crc kubenswrapper[4856]: E1122 08:08:46.710628 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:08:57 crc kubenswrapper[4856]: I1122 08:08:57.709851 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:08:57 crc kubenswrapper[4856]: E1122 08:08:57.710803 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.857772 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftn9p"] Nov 22 08:09:01 crc kubenswrapper[4856]: E1122 08:09:01.858453 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="extract-utilities" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.858470 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="extract-utilities" Nov 22 08:09:01 crc kubenswrapper[4856]: E1122 08:09:01.858489 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="registry-server" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.858497 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="registry-server" Nov 22 08:09:01 crc kubenswrapper[4856]: E1122 08:09:01.858550 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="extract-content" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.858559 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="extract-content" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.858736 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d08f07-64cd-4109-b672-f2f57e2c63f4" containerName="registry-server" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.860022 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.870900 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftn9p"] Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.952593 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-catalog-content\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.952658 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2xtm\" (UniqueName: \"kubernetes.io/projected/41e54cac-c849-42be-bec3-0a5f46b07d94-kube-api-access-m2xtm\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:01 crc kubenswrapper[4856]: I1122 08:09:01.952836 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-utilities\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.046182 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z6rc9"] Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.047657 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.053952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-catalog-content\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.054009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2xtm\" (UniqueName: \"kubernetes.io/projected/41e54cac-c849-42be-bec3-0a5f46b07d94-kube-api-access-m2xtm\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.054069 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-utilities\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.054498 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-catalog-content\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.054588 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-utilities\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.059488 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6rc9"] Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.077647 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2xtm\" (UniqueName: \"kubernetes.io/projected/41e54cac-c849-42be-bec3-0a5f46b07d94-kube-api-access-m2xtm\") pod \"community-operators-ftn9p\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.155611 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-utilities\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.155669 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npxmv\" (UniqueName: \"kubernetes.io/projected/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-kube-api-access-npxmv\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.155711 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-catalog-content\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.184445 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.257044 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-utilities\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.257461 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npxmv\" (UniqueName: \"kubernetes.io/projected/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-kube-api-access-npxmv\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.257567 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-catalog-content\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.258156 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-catalog-content\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.258451 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-utilities\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.290450 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npxmv\" (UniqueName: \"kubernetes.io/projected/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-kube-api-access-npxmv\") pod \"certified-operators-z6rc9\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.366417 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.747278 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftn9p"] Nov 22 08:09:02 crc kubenswrapper[4856]: I1122 08:09:02.928367 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z6rc9"] Nov 22 08:09:03 crc kubenswrapper[4856]: I1122 08:09:03.185379 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6rc9" event={"ID":"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e","Type":"ContainerStarted","Data":"03fffe8600d3284d7aeace04c4053d9d2efbbc4cb4064d1db92a0f0f512a1670"} Nov 22 08:09:03 crc kubenswrapper[4856]: I1122 08:09:03.186500 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftn9p" event={"ID":"41e54cac-c849-42be-bec3-0a5f46b07d94","Type":"ContainerStarted","Data":"bfdfbed102d65fc227cd582f1e9dedf89592f5fb5062decd5e914093c3f6f6b4"} Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.196428 4856 generic.go:334] "Generic (PLEG): container finished" podID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerID="0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f" exitCode=0 Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.196502 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6rc9" event={"ID":"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e","Type":"ContainerDied","Data":"0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f"} Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.198699 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.198914 4856 generic.go:334] "Generic (PLEG): container finished" podID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerID="bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d" exitCode=0 Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.199031 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftn9p" event={"ID":"41e54cac-c849-42be-bec3-0a5f46b07d94","Type":"ContainerDied","Data":"bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d"} Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.460717 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d5k52"] Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.462342 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.467257 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5k52"] Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.595424 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-catalog-content\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.595588 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zpj\" (UniqueName: \"kubernetes.io/projected/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-kube-api-access-k8zpj\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.595632 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-utilities\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.697402 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zpj\" (UniqueName: \"kubernetes.io/projected/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-kube-api-access-k8zpj\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.697657 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-utilities\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.697821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-catalog-content\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.698218 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-utilities\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.698499 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-catalog-content\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.719164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k8zpj\" (UniqueName: \"kubernetes.io/projected/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-kube-api-access-k8zpj\") pod \"redhat-operators-d5k52\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:04 crc kubenswrapper[4856]: I1122 08:09:04.821364 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:05 crc kubenswrapper[4856]: I1122 08:09:05.208418 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftn9p" event={"ID":"41e54cac-c849-42be-bec3-0a5f46b07d94","Type":"ContainerStarted","Data":"d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8"} Nov 22 08:09:05 crc kubenswrapper[4856]: I1122 08:09:05.210904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6rc9" event={"ID":"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e","Type":"ContainerStarted","Data":"7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378"} Nov 22 08:09:05 crc kubenswrapper[4856]: I1122 08:09:05.282290 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d5k52"] Nov 22 08:09:06 crc kubenswrapper[4856]: I1122 08:09:06.218825 4856 generic.go:334] "Generic (PLEG): container finished" podID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerID="d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8" exitCode=0 Nov 22 08:09:06 crc kubenswrapper[4856]: I1122 08:09:06.219787 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftn9p" event={"ID":"41e54cac-c849-42be-bec3-0a5f46b07d94","Type":"ContainerDied","Data":"d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8"} Nov 22 08:09:06 crc kubenswrapper[4856]: I1122 08:09:06.220966 4856 generic.go:334] "Generic (PLEG): container finished" podID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerID="7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469" exitCode=0 Nov 22 08:09:06 crc kubenswrapper[4856]: I1122 08:09:06.221245 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5k52" event={"ID":"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873","Type":"ContainerDied","Data":"7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469"} Nov 22 08:09:06 crc kubenswrapper[4856]: I1122 08:09:06.221351 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5k52" event={"ID":"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873","Type":"ContainerStarted","Data":"e478ce678e286b3b7a45ec74614ce445dd50d4f78c7fe20e092e0670fcae11b2"} Nov 22 08:09:06 crc kubenswrapper[4856]: I1122 08:09:06.224404 4856 generic.go:334] "Generic (PLEG): container finished" podID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerID="7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378" exitCode=0 Nov 22 08:09:06 crc kubenswrapper[4856]: I1122 08:09:06.224433 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6rc9" event={"ID":"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e","Type":"ContainerDied","Data":"7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378"} Nov 22 08:09:07 crc kubenswrapper[4856]: I1122 08:09:07.234740 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftn9p" 
event={"ID":"41e54cac-c849-42be-bec3-0a5f46b07d94","Type":"ContainerStarted","Data":"43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926"} Nov 22 08:09:07 crc kubenswrapper[4856]: I1122 08:09:07.237874 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5k52" event={"ID":"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873","Type":"ContainerStarted","Data":"9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd"} Nov 22 08:09:07 crc kubenswrapper[4856]: I1122 08:09:07.240019 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6rc9" event={"ID":"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e","Type":"ContainerStarted","Data":"9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015"} Nov 22 08:09:07 crc kubenswrapper[4856]: I1122 08:09:07.258828 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftn9p" podStartSLOduration=3.73481816 podStartE2EDuration="6.258805952s" podCreationTimestamp="2025-11-22 08:09:01 +0000 UTC" firstStartedPulling="2025-11-22 08:09:04.201830209 +0000 UTC m=+3986.615223487" lastFinishedPulling="2025-11-22 08:09:06.725818021 +0000 UTC m=+3989.139211279" observedRunningTime="2025-11-22 08:09:07.253682485 +0000 UTC m=+3989.667075763" watchObservedRunningTime="2025-11-22 08:09:07.258805952 +0000 UTC m=+3989.672199210" Nov 22 08:09:07 crc kubenswrapper[4856]: I1122 08:09:07.273591 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z6rc9" podStartSLOduration=2.8408645310000002 podStartE2EDuration="5.273568929s" podCreationTimestamp="2025-11-22 08:09:02 +0000 UTC" firstStartedPulling="2025-11-22 08:09:04.198265854 +0000 UTC m=+3986.611659132" lastFinishedPulling="2025-11-22 08:09:06.630970272 +0000 UTC m=+3989.044363530" observedRunningTime="2025-11-22 08:09:07.272325726 +0000 UTC m=+3989.685719004" watchObservedRunningTime="2025-11-22 08:09:07.273568929 +0000 UTC m=+3989.686962187" Nov 22 08:09:08 crc kubenswrapper[4856]: I1122 08:09:08.248605 4856 generic.go:334] "Generic (PLEG): container finished" podID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerID="9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd" exitCode=0 Nov 22 08:09:08 crc kubenswrapper[4856]: I1122 08:09:08.248743 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5k52" event={"ID":"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873","Type":"ContainerDied","Data":"9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd"} Nov 22 08:09:09 crc kubenswrapper[4856]: I1122 08:09:09.259264 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5k52" event={"ID":"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873","Type":"ContainerStarted","Data":"619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30"} Nov 22 08:09:09 crc kubenswrapper[4856]: I1122 08:09:09.279449 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d5k52" podStartSLOduration=2.436113546 podStartE2EDuration="5.279434278s" podCreationTimestamp="2025-11-22 08:09:04 +0000 UTC" firstStartedPulling="2025-11-22 08:09:06.222316831 +0000 UTC m=+3988.635710089" lastFinishedPulling="2025-11-22 08:09:09.065637563 +0000 UTC m=+3991.479030821" observedRunningTime="2025-11-22 08:09:09.277342382 +0000 UTC m=+3991.690735640" watchObservedRunningTime="2025-11-22 
08:09:09.279434278 +0000 UTC m=+3991.692827536" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.185560 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.186032 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.228155 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.329125 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.367983 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.368041 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.406866 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.710428 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:09:12 crc kubenswrapper[4856]: E1122 08:09:12.710746 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:09:12 crc kubenswrapper[4856]: I1122 08:09:12.845191 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftn9p"] Nov 22 08:09:13 crc kubenswrapper[4856]: I1122 08:09:13.338944 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:14 crc kubenswrapper[4856]: I1122 08:09:14.296679 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftn9p" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="registry-server" containerID="cri-o://43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926" gracePeriod=2 Nov 22 08:09:14 crc kubenswrapper[4856]: I1122 08:09:14.821700 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:14 crc kubenswrapper[4856]: I1122 08:09:14.821753 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:14 crc kubenswrapper[4856]: I1122 08:09:14.865716 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.039813 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6rc9"] Nov 22 08:09:15 crc 
kubenswrapper[4856]: I1122 08:09:15.304728 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z6rc9" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="registry-server" containerID="cri-o://9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015" gracePeriod=2 Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.347168 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.851939 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.859290 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.980241 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-utilities\") pod \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.980309 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-catalog-content\") pod \"41e54cac-c849-42be-bec3-0a5f46b07d94\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.980329 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-utilities\") pod \"41e54cac-c849-42be-bec3-0a5f46b07d94\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.980396 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-catalog-content\") pod \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.980434 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2xtm\" (UniqueName: \"kubernetes.io/projected/41e54cac-c849-42be-bec3-0a5f46b07d94-kube-api-access-m2xtm\") pod \"41e54cac-c849-42be-bec3-0a5f46b07d94\" (UID: \"41e54cac-c849-42be-bec3-0a5f46b07d94\") " Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.980504 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npxmv\" (UniqueName: \"kubernetes.io/projected/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-kube-api-access-npxmv\") pod \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\" (UID: \"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e\") " Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.981228 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-utilities" (OuterVolumeSpecName: "utilities") pod "c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" (UID: "c3cd2c25-e9d8-4921-bc8b-80e83b6e668e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.982726 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-utilities" (OuterVolumeSpecName: "utilities") pod "41e54cac-c849-42be-bec3-0a5f46b07d94" (UID: "41e54cac-c849-42be-bec3-0a5f46b07d94"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.992157 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41e54cac-c849-42be-bec3-0a5f46b07d94-kube-api-access-m2xtm" (OuterVolumeSpecName: "kube-api-access-m2xtm") pod "41e54cac-c849-42be-bec3-0a5f46b07d94" (UID: "41e54cac-c849-42be-bec3-0a5f46b07d94"). InnerVolumeSpecName "kube-api-access-m2xtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:09:15 crc kubenswrapper[4856]: I1122 08:09:15.992232 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-kube-api-access-npxmv" (OuterVolumeSpecName: "kube-api-access-npxmv") pod "c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" (UID: "c3cd2c25-e9d8-4921-bc8b-80e83b6e668e"). InnerVolumeSpecName "kube-api-access-npxmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.040186 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41e54cac-c849-42be-bec3-0a5f46b07d94" (UID: "41e54cac-c849-42be-bec3-0a5f46b07d94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.041871 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" (UID: "c3cd2c25-e9d8-4921-bc8b-80e83b6e668e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.082006 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.082042 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2xtm\" (UniqueName: \"kubernetes.io/projected/41e54cac-c849-42be-bec3-0a5f46b07d94-kube-api-access-m2xtm\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.082053 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npxmv\" (UniqueName: \"kubernetes.io/projected/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-kube-api-access-npxmv\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.082064 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.082073 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.082081 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e54cac-c849-42be-bec3-0a5f46b07d94-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.316371 4856 generic.go:334] "Generic (PLEG): container finished" podID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerID="9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015" exitCode=0 Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.316450 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6rc9" event={"ID":"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e","Type":"ContainerDied","Data":"9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015"} Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.316483 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z6rc9" event={"ID":"c3cd2c25-e9d8-4921-bc8b-80e83b6e668e","Type":"ContainerDied","Data":"03fffe8600d3284d7aeace04c4053d9d2efbbc4cb4064d1db92a0f0f512a1670"} Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.316533 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z6rc9" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.316548 4856 scope.go:117] "RemoveContainer" containerID="9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.320905 4856 generic.go:334] "Generic (PLEG): container finished" podID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerID="43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926" exitCode=0 Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.321768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftn9p" event={"ID":"41e54cac-c849-42be-bec3-0a5f46b07d94","Type":"ContainerDied","Data":"43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926"} Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.321839 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftn9p" event={"ID":"41e54cac-c849-42be-bec3-0a5f46b07d94","Type":"ContainerDied","Data":"bfdfbed102d65fc227cd582f1e9dedf89592f5fb5062decd5e914093c3f6f6b4"} Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.322273 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftn9p" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.348498 4856 scope.go:117] "RemoveContainer" containerID="7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.367458 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftn9p"] Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.373949 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftn9p"] Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.390692 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z6rc9"] Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.395743 4856 scope.go:117] "RemoveContainer" containerID="0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.398025 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z6rc9"] Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.415759 4856 scope.go:117] "RemoveContainer" containerID="9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015" Nov 22 08:09:16 crc kubenswrapper[4856]: E1122 08:09:16.416337 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015\": container with ID starting with 9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015 not found: ID does not exist" containerID="9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.416372 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015"} err="failed to get container status \"9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015\": rpc error: code = NotFound desc = could not find container \"9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015\": container with ID starting with 
9fde54ff50491a8c08b2b00454858a51e6712701d88d7948aed9a1a7e1c58015 not found: ID does not exist" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.416396 4856 scope.go:117] "RemoveContainer" containerID="7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378" Nov 22 08:09:16 crc kubenswrapper[4856]: E1122 08:09:16.416781 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378\": container with ID starting with 7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378 not found: ID does not exist" containerID="7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.416806 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378"} err="failed to get container status \"7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378\": rpc error: code = NotFound desc = could not find container \"7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378\": container with ID starting with 7297decce50938ff2a64b05342b707bede0827284a31056b682033d89d87d378 not found: ID does not exist" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.416819 4856 scope.go:117] "RemoveContainer" containerID="0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f" Nov 22 08:09:16 crc kubenswrapper[4856]: E1122 08:09:16.417786 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f\": container with ID starting with 0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f not found: ID does not exist" containerID="0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.417908 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f"} err="failed to get container status \"0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f\": rpc error: code = NotFound desc = could not find container \"0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f\": container with ID starting with 0dbedec96bd33294434c48fe5f703dbf71355b45eb74706f3ca7ec5487f9b90f not found: ID does not exist" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.417997 4856 scope.go:117] "RemoveContainer" containerID="43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.440633 4856 scope.go:117] "RemoveContainer" containerID="d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.457053 4856 scope.go:117] "RemoveContainer" containerID="bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.483854 4856 scope.go:117] "RemoveContainer" containerID="43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926" Nov 22 08:09:16 crc kubenswrapper[4856]: E1122 08:09:16.484930 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926\": container 
with ID starting with 43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926 not found: ID does not exist" containerID="43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.485012 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926"} err="failed to get container status \"43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926\": rpc error: code = NotFound desc = could not find container \"43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926\": container with ID starting with 43ba1e7d925f62e8effc35c77ea8e6209cc27c06b1b96f7504d7b5bf5bf43926 not found: ID does not exist" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.485044 4856 scope.go:117] "RemoveContainer" containerID="d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8" Nov 22 08:09:16 crc kubenswrapper[4856]: E1122 08:09:16.486124 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8\": container with ID starting with d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8 not found: ID does not exist" containerID="d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.486168 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8"} err="failed to get container status \"d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8\": rpc error: code = NotFound desc = could not find container \"d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8\": container with ID starting with d092c92438030d0237aaf58296f4b42b1eb382d4a3c03da4c885f4e2da08f1b8 not found: ID does not exist" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.486207 4856 scope.go:117] "RemoveContainer" containerID="bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d" Nov 22 08:09:16 crc kubenswrapper[4856]: E1122 08:09:16.486675 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d\": container with ID starting with bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d not found: ID does not exist" containerID="bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.486704 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d"} err="failed to get container status \"bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d\": rpc error: code = NotFound desc = could not find container \"bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d\": container with ID starting with bde762f89d652780070c92f01e36c2683a22b6dafb2a7aeb329597d7245a493d not found: ID does not exist" Nov 22 08:09:16 crc kubenswrapper[4856]: I1122 08:09:16.722717 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" path="/var/lib/kubelet/pods/41e54cac-c849-42be-bec3-0a5f46b07d94/volumes" Nov 22 08:09:16 crc kubenswrapper[4856]: 
I1122 08:09:16.723608 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" path="/var/lib/kubelet/pods/c3cd2c25-e9d8-4921-bc8b-80e83b6e668e/volumes" Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.443659 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5k52"] Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.444263 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d5k52" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="registry-server" containerID="cri-o://619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30" gracePeriod=2 Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.815495 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.818303 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8zpj\" (UniqueName: \"kubernetes.io/projected/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-kube-api-access-k8zpj\") pod \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.818387 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-utilities\") pod \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.818427 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-catalog-content\") pod \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\" (UID: \"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873\") " Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.819906 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-utilities" (OuterVolumeSpecName: "utilities") pod "cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" (UID: "cdb580ca-77d8-4adf-aea1-e2bbfd4fb873"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.828608 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-kube-api-access-k8zpj" (OuterVolumeSpecName: "kube-api-access-k8zpj") pod "cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" (UID: "cdb580ca-77d8-4adf-aea1-e2bbfd4fb873"). InnerVolumeSpecName "kube-api-access-k8zpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.920222 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8zpj\" (UniqueName: \"kubernetes.io/projected/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-kube-api-access-k8zpj\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.920276 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:17 crc kubenswrapper[4856]: I1122 08:09:17.924425 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" (UID: "cdb580ca-77d8-4adf-aea1-e2bbfd4fb873"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.022227 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.353320 4856 generic.go:334] "Generic (PLEG): container finished" podID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerID="619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30" exitCode=0 Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.353396 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5k52" event={"ID":"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873","Type":"ContainerDied","Data":"619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30"} Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.353812 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d5k52" event={"ID":"cdb580ca-77d8-4adf-aea1-e2bbfd4fb873","Type":"ContainerDied","Data":"e478ce678e286b3b7a45ec74614ce445dd50d4f78c7fe20e092e0670fcae11b2"} Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.353850 4856 scope.go:117] "RemoveContainer" containerID="619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.353570 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d5k52" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.376355 4856 scope.go:117] "RemoveContainer" containerID="9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.404646 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d5k52"] Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.410978 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d5k52"] Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.422659 4856 scope.go:117] "RemoveContainer" containerID="7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.446643 4856 scope.go:117] "RemoveContainer" containerID="619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30" Nov 22 08:09:18 crc kubenswrapper[4856]: E1122 08:09:18.447379 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30\": container with ID starting with 619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30 not found: ID does not exist" containerID="619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.447436 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30"} err="failed to get container status \"619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30\": rpc error: code = NotFound desc = could not find container \"619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30\": container with ID starting with 619444fa6f75e2da2ce6cf60ffd49aa035df635a4643e03086fc9ddf99582a30 not found: ID does not exist" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.447466 4856 scope.go:117] "RemoveContainer" containerID="9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd" Nov 22 08:09:18 crc kubenswrapper[4856]: E1122 08:09:18.447827 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd\": container with ID starting with 9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd not found: ID does not exist" containerID="9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.447883 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd"} err="failed to get container status \"9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd\": rpc error: code = NotFound desc = could not find container \"9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd\": container with ID starting with 9a919d0ffbd3030e1d5bff15d1d6299667bdccbffc53b5afa9f687a590c989dd not found: ID does not exist" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.447914 4856 scope.go:117] "RemoveContainer" containerID="7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469" Nov 22 08:09:18 crc kubenswrapper[4856]: E1122 08:09:18.448289 4856 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469\": container with ID starting with 7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469 not found: ID does not exist" containerID="7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.448319 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469"} err="failed to get container status \"7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469\": rpc error: code = NotFound desc = could not find container \"7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469\": container with ID starting with 7531a52e273d11d0bdf36c205638bda09cf5c3c4259d33f945970e4c46e31469 not found: ID does not exist" Nov 22 08:09:18 crc kubenswrapper[4856]: I1122 08:09:18.718639 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" path="/var/lib/kubelet/pods/cdb580ca-77d8-4adf-aea1-e2bbfd4fb873/volumes" Nov 22 08:09:25 crc kubenswrapper[4856]: I1122 08:09:25.709347 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:09:25 crc kubenswrapper[4856]: E1122 08:09:25.710121 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:09:38 crc kubenswrapper[4856]: I1122 08:09:38.713112 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:09:38 crc kubenswrapper[4856]: E1122 08:09:38.715255 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:09:49 crc kubenswrapper[4856]: I1122 08:09:49.709914 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:09:49 crc kubenswrapper[4856]: E1122 08:09:49.710992 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:10:02 crc kubenswrapper[4856]: I1122 08:10:02.710295 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:10:02 crc kubenswrapper[4856]: E1122 08:10:02.711117 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.290997 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kv77p"] Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292010 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="extract-utilities" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292025 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="extract-utilities" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292034 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292041 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292059 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="extract-utilities" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292067 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="extract-utilities" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292076 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="extract-content" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292083 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="extract-content" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292095 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292101 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292117 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="extract-content" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292123 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="extract-content" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292130 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="extract-content" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292136 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="extract-content" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292147 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 
08:10:13.292153 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: E1122 08:10:13.292161 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="extract-utilities" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292167 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="extract-utilities" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292298 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="41e54cac-c849-42be-bec3-0a5f46b07d94" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292324 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdb580ca-77d8-4adf-aea1-e2bbfd4fb873" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.292333 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3cd2c25-e9d8-4921-bc8b-80e83b6e668e" containerName="registry-server" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.294048 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.308191 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-utilities\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.308755 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj2rm\" (UniqueName: \"kubernetes.io/projected/3f5caa02-c63f-463d-88fd-d9d4322cde64-kube-api-access-rj2rm\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.308824 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-catalog-content\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.311591 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv77p"] Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.409303 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-catalog-content\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.409381 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-utilities\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 
crc kubenswrapper[4856]: I1122 08:10:13.409424 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj2rm\" (UniqueName: \"kubernetes.io/projected/3f5caa02-c63f-463d-88fd-d9d4322cde64-kube-api-access-rj2rm\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.409893 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-catalog-content\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.410025 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-utilities\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.428223 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj2rm\" (UniqueName: \"kubernetes.io/projected/3f5caa02-c63f-463d-88fd-d9d4322cde64-kube-api-access-rj2rm\") pod \"redhat-marketplace-kv77p\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:13 crc kubenswrapper[4856]: I1122 08:10:13.665808 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:14 crc kubenswrapper[4856]: I1122 08:10:14.078712 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv77p"] Nov 22 08:10:14 crc kubenswrapper[4856]: W1122 08:10:14.089344 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f5caa02_c63f_463d_88fd_d9d4322cde64.slice/crio-8596d515dfc0f77f6023d9ec12474ebbe43f081eb4313e8b9ca6148dce5d34be WatchSource:0}: Error finding container 8596d515dfc0f77f6023d9ec12474ebbe43f081eb4313e8b9ca6148dce5d34be: Status 404 returned error can't find the container with id 8596d515dfc0f77f6023d9ec12474ebbe43f081eb4313e8b9ca6148dce5d34be Nov 22 08:10:14 crc kubenswrapper[4856]: I1122 08:10:14.755466 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerID="7382b49ab94b16e81f8dba84da4287ba1f5ed7aa9f3159d02914a96ae762a405" exitCode=0 Nov 22 08:10:14 crc kubenswrapper[4856]: I1122 08:10:14.755540 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv77p" event={"ID":"3f5caa02-c63f-463d-88fd-d9d4322cde64","Type":"ContainerDied","Data":"7382b49ab94b16e81f8dba84da4287ba1f5ed7aa9f3159d02914a96ae762a405"} Nov 22 08:10:14 crc kubenswrapper[4856]: I1122 08:10:14.755837 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv77p" event={"ID":"3f5caa02-c63f-463d-88fd-d9d4322cde64","Type":"ContainerStarted","Data":"8596d515dfc0f77f6023d9ec12474ebbe43f081eb4313e8b9ca6148dce5d34be"} Nov 22 08:10:15 crc kubenswrapper[4856]: I1122 08:10:15.709873 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:10:15 crc 
kubenswrapper[4856]: E1122 08:10:15.710406 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:10:15 crc kubenswrapper[4856]: I1122 08:10:15.769396 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerID="96179db219402dc1c30c8d5c9cc4b74802f5c2c1c6f2d45e79d7119e535c3cc1" exitCode=0 Nov 22 08:10:15 crc kubenswrapper[4856]: I1122 08:10:15.769473 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv77p" event={"ID":"3f5caa02-c63f-463d-88fd-d9d4322cde64","Type":"ContainerDied","Data":"96179db219402dc1c30c8d5c9cc4b74802f5c2c1c6f2d45e79d7119e535c3cc1"} Nov 22 08:10:16 crc kubenswrapper[4856]: I1122 08:10:16.778816 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv77p" event={"ID":"3f5caa02-c63f-463d-88fd-d9d4322cde64","Type":"ContainerStarted","Data":"f546473cbf4e4f151a5a53063cfc812f34e7e31ee59fcdb4ba19c52fa4d077cf"} Nov 22 08:10:16 crc kubenswrapper[4856]: I1122 08:10:16.802010 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kv77p" podStartSLOduration=2.302094148 podStartE2EDuration="3.801976831s" podCreationTimestamp="2025-11-22 08:10:13 +0000 UTC" firstStartedPulling="2025-11-22 08:10:14.757118174 +0000 UTC m=+4057.170511422" lastFinishedPulling="2025-11-22 08:10:16.257000847 +0000 UTC m=+4058.670394105" observedRunningTime="2025-11-22 08:10:16.79856075 +0000 UTC m=+4059.211954008" watchObservedRunningTime="2025-11-22 08:10:16.801976831 +0000 UTC m=+4059.215370089" Nov 22 08:10:23 crc kubenswrapper[4856]: I1122 08:10:23.666485 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:23 crc kubenswrapper[4856]: I1122 08:10:23.667165 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:23 crc kubenswrapper[4856]: I1122 08:10:23.730395 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:23 crc kubenswrapper[4856]: I1122 08:10:23.881165 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:23 crc kubenswrapper[4856]: I1122 08:10:23.964906 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv77p"] Nov 22 08:10:25 crc kubenswrapper[4856]: I1122 08:10:25.839608 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kv77p" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="registry-server" containerID="cri-o://f546473cbf4e4f151a5a53063cfc812f34e7e31ee59fcdb4ba19c52fa4d077cf" gracePeriod=2 Nov 22 08:10:26 crc kubenswrapper[4856]: I1122 08:10:26.859647 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerID="f546473cbf4e4f151a5a53063cfc812f34e7e31ee59fcdb4ba19c52fa4d077cf" 
exitCode=0 Nov 22 08:10:26 crc kubenswrapper[4856]: I1122 08:10:26.859941 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv77p" event={"ID":"3f5caa02-c63f-463d-88fd-d9d4322cde64","Type":"ContainerDied","Data":"f546473cbf4e4f151a5a53063cfc812f34e7e31ee59fcdb4ba19c52fa4d077cf"} Nov 22 08:10:26 crc kubenswrapper[4856]: I1122 08:10:26.896110 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.022052 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-catalog-content\") pod \"3f5caa02-c63f-463d-88fd-d9d4322cde64\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.022104 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-utilities\") pod \"3f5caa02-c63f-463d-88fd-d9d4322cde64\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.022173 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj2rm\" (UniqueName: \"kubernetes.io/projected/3f5caa02-c63f-463d-88fd-d9d4322cde64-kube-api-access-rj2rm\") pod \"3f5caa02-c63f-463d-88fd-d9d4322cde64\" (UID: \"3f5caa02-c63f-463d-88fd-d9d4322cde64\") " Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.023464 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-utilities" (OuterVolumeSpecName: "utilities") pod "3f5caa02-c63f-463d-88fd-d9d4322cde64" (UID: "3f5caa02-c63f-463d-88fd-d9d4322cde64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.030772 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f5caa02-c63f-463d-88fd-d9d4322cde64-kube-api-access-rj2rm" (OuterVolumeSpecName: "kube-api-access-rj2rm") pod "3f5caa02-c63f-463d-88fd-d9d4322cde64" (UID: "3f5caa02-c63f-463d-88fd-d9d4322cde64"). InnerVolumeSpecName "kube-api-access-rj2rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.043035 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f5caa02-c63f-463d-88fd-d9d4322cde64" (UID: "3f5caa02-c63f-463d-88fd-d9d4322cde64"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.123712 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.123766 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f5caa02-c63f-463d-88fd-d9d4322cde64-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.123781 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj2rm\" (UniqueName: \"kubernetes.io/projected/3f5caa02-c63f-463d-88fd-d9d4322cde64-kube-api-access-rj2rm\") on node \"crc\" DevicePath \"\"" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.709411 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:10:27 crc kubenswrapper[4856]: E1122 08:10:27.709684 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.869630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kv77p" event={"ID":"3f5caa02-c63f-463d-88fd-d9d4322cde64","Type":"ContainerDied","Data":"8596d515dfc0f77f6023d9ec12474ebbe43f081eb4313e8b9ca6148dce5d34be"} Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.869659 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kv77p" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.869687 4856 scope.go:117] "RemoveContainer" containerID="f546473cbf4e4f151a5a53063cfc812f34e7e31ee59fcdb4ba19c52fa4d077cf" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.898869 4856 scope.go:117] "RemoveContainer" containerID="96179db219402dc1c30c8d5c9cc4b74802f5c2c1c6f2d45e79d7119e535c3cc1" Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.906044 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv77p"] Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.912320 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kv77p"] Nov 22 08:10:27 crc kubenswrapper[4856]: I1122 08:10:27.916448 4856 scope.go:117] "RemoveContainer" containerID="7382b49ab94b16e81f8dba84da4287ba1f5ed7aa9f3159d02914a96ae762a405" Nov 22 08:10:28 crc kubenswrapper[4856]: I1122 08:10:28.718802 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" path="/var/lib/kubelet/pods/3f5caa02-c63f-463d-88fd-d9d4322cde64/volumes" Nov 22 08:10:38 crc kubenswrapper[4856]: I1122 08:10:38.713599 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:10:38 crc kubenswrapper[4856]: E1122 08:10:38.714524 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:10:51 crc kubenswrapper[4856]: I1122 08:10:51.709840 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:10:51 crc kubenswrapper[4856]: E1122 08:10:51.711011 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:11:02 crc kubenswrapper[4856]: I1122 08:11:02.710185 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:11:02 crc kubenswrapper[4856]: E1122 08:11:02.711145 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:11:15 crc kubenswrapper[4856]: I1122 08:11:15.710220 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:11:15 crc kubenswrapper[4856]: E1122 08:11:15.711375 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:11:26 crc kubenswrapper[4856]: I1122 08:11:26.710352 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:11:26 crc kubenswrapper[4856]: E1122 08:11:26.711226 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:11:40 crc kubenswrapper[4856]: I1122 08:11:40.709631 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:11:41 crc kubenswrapper[4856]: I1122 08:11:41.396481 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"663d2d8acba5686d176e8643cd40d72b7984a982359f2095330e4eb26c70fd1c"} Nov 22 08:13:59 crc kubenswrapper[4856]: I1122 08:13:59.754913 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:13:59 crc kubenswrapper[4856]: I1122 08:13:59.755775 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:14:29 crc kubenswrapper[4856]: I1122 08:14:29.754779 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:14:29 crc kubenswrapper[4856]: I1122 08:14:29.755434 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:14:59 crc kubenswrapper[4856]: I1122 08:14:59.754499 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:14:59 crc kubenswrapper[4856]: I1122 08:14:59.755196 4856 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:14:59 crc kubenswrapper[4856]: I1122 08:14:59.755261 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:14:59 crc kubenswrapper[4856]: I1122 08:14:59.756049 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"663d2d8acba5686d176e8643cd40d72b7984a982359f2095330e4eb26c70fd1c"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:14:59 crc kubenswrapper[4856]: I1122 08:14:59.756169 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://663d2d8acba5686d176e8643cd40d72b7984a982359f2095330e4eb26c70fd1c" gracePeriod=600 Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.145556 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c"] Nov 22 08:15:00 crc kubenswrapper[4856]: E1122 08:15:00.146272 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="extract-utilities" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.146300 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="extract-utilities" Nov 22 08:15:00 crc kubenswrapper[4856]: E1122 08:15:00.146316 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="extract-content" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.146324 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="extract-content" Nov 22 08:15:00 crc kubenswrapper[4856]: E1122 08:15:00.146339 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="registry-server" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.146349 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="registry-server" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.146535 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f5caa02-c63f-463d-88fd-d9d4322cde64" containerName="registry-server" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.147061 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.148907 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.149131 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.180292 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c"] Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.199330 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-secret-volume\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.199705 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbppr\" (UniqueName: \"kubernetes.io/projected/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-kube-api-access-zbppr\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.199842 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-config-volume\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.301383 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbppr\" (UniqueName: \"kubernetes.io/projected/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-kube-api-access-zbppr\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.301447 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-config-volume\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.301478 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-secret-volume\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.302731 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-config-volume\") pod 
\"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.314128 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-secret-volume\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.317034 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbppr\" (UniqueName: \"kubernetes.io/projected/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-kube-api-access-zbppr\") pod \"collect-profiles-29396655-vht6c\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.470236 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.831910 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="663d2d8acba5686d176e8643cd40d72b7984a982359f2095330e4eb26c70fd1c" exitCode=0 Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.832108 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"663d2d8acba5686d176e8643cd40d72b7984a982359f2095330e4eb26c70fd1c"} Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.832406 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e"} Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.832434 4856 scope.go:117] "RemoveContainer" containerID="187d3bca008ac71331584b8ff5077ef22ddedcda529e54d3582ab9b1f49ae7d3" Nov 22 08:15:00 crc kubenswrapper[4856]: I1122 08:15:00.911432 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c"] Nov 22 08:15:01 crc kubenswrapper[4856]: I1122 08:15:01.844394 4856 generic.go:334] "Generic (PLEG): container finished" podID="eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" containerID="6e1d2f3c38ac18a9201c0b70e01e02be31cd1e024837c3df8f58e84a84e3a3b5" exitCode=0 Nov 22 08:15:01 crc kubenswrapper[4856]: I1122 08:15:01.844453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" event={"ID":"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e","Type":"ContainerDied","Data":"6e1d2f3c38ac18a9201c0b70e01e02be31cd1e024837c3df8f58e84a84e3a3b5"} Nov 22 08:15:01 crc kubenswrapper[4856]: I1122 08:15:01.844713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" event={"ID":"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e","Type":"ContainerStarted","Data":"602d6f0bb520a633ef2ee87d0e990a86fd686b7d592290e7d7758913f1f25d02"} Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.144556 4856 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.243930 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-config-volume\") pod \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.244015 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-secret-volume\") pod \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.244142 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbppr\" (UniqueName: \"kubernetes.io/projected/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-kube-api-access-zbppr\") pod \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\" (UID: \"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e\") " Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.245610 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-config-volume" (OuterVolumeSpecName: "config-volume") pod "eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" (UID: "eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.250567 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" (UID: "eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.250641 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-kube-api-access-zbppr" (OuterVolumeSpecName: "kube-api-access-zbppr") pod "eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" (UID: "eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e"). InnerVolumeSpecName "kube-api-access-zbppr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.345443 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbppr\" (UniqueName: \"kubernetes.io/projected/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-kube-api-access-zbppr\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.345490 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.345503 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.874923 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" event={"ID":"eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e","Type":"ContainerDied","Data":"602d6f0bb520a633ef2ee87d0e990a86fd686b7d592290e7d7758913f1f25d02"} Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.875476 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="602d6f0bb520a633ef2ee87d0e990a86fd686b7d592290e7d7758913f1f25d02" Nov 22 08:15:03 crc kubenswrapper[4856]: I1122 08:15:03.874977 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c" Nov 22 08:15:04 crc kubenswrapper[4856]: I1122 08:15:04.221744 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk"] Nov 22 08:15:04 crc kubenswrapper[4856]: I1122 08:15:04.227070 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396610-86kfk"] Nov 22 08:15:04 crc kubenswrapper[4856]: I1122 08:15:04.721270 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19df38b-b56d-4de6-9e84-be72dd06e7b3" path="/var/lib/kubelet/pods/b19df38b-b56d-4de6-9e84-be72dd06e7b3/volumes" Nov 22 08:15:21 crc kubenswrapper[4856]: I1122 08:15:21.439564 4856 scope.go:117] "RemoveContainer" containerID="19fdc225f8b1b275b9b6e6920018dfc4146c64843a5ff33faf2ec5fe1a6428e4" Nov 22 08:17:29 crc kubenswrapper[4856]: I1122 08:17:29.755156 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:17:29 crc kubenswrapper[4856]: I1122 08:17:29.755828 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:17:59 crc kubenswrapper[4856]: I1122 08:17:59.754495 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 22 08:17:59 crc kubenswrapper[4856]: I1122 08:17:59.755090 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:18:29 crc kubenswrapper[4856]: I1122 08:18:29.755073 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:18:29 crc kubenswrapper[4856]: I1122 08:18:29.756110 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:18:29 crc kubenswrapper[4856]: I1122 08:18:29.756193 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:18:29 crc kubenswrapper[4856]: I1122 08:18:29.757299 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:18:29 crc kubenswrapper[4856]: I1122 08:18:29.757383 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" gracePeriod=600 Nov 22 08:18:29 crc kubenswrapper[4856]: E1122 08:18:29.879957 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:18:30 crc kubenswrapper[4856]: I1122 08:18:30.626917 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" exitCode=0 Nov 22 08:18:30 crc kubenswrapper[4856]: I1122 08:18:30.626996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e"} Nov 22 08:18:30 crc kubenswrapper[4856]: I1122 08:18:30.627365 4856 scope.go:117] "RemoveContainer" containerID="663d2d8acba5686d176e8643cd40d72b7984a982359f2095330e4eb26c70fd1c" Nov 22 08:18:30 crc kubenswrapper[4856]: I1122 08:18:30.627979 4856 scope.go:117] "RemoveContainer" 
containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:18:30 crc kubenswrapper[4856]: E1122 08:18:30.628344 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:18:44 crc kubenswrapper[4856]: I1122 08:18:44.710186 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:18:44 crc kubenswrapper[4856]: E1122 08:18:44.711013 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:18:57 crc kubenswrapper[4856]: I1122 08:18:57.710020 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:18:57 crc kubenswrapper[4856]: E1122 08:18:57.710902 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:19:10 crc kubenswrapper[4856]: I1122 08:19:10.710553 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:19:10 crc kubenswrapper[4856]: E1122 08:19:10.712099 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:19:25 crc kubenswrapper[4856]: I1122 08:19:25.709170 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:19:25 crc kubenswrapper[4856]: E1122 08:19:25.709892 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:19:40 crc kubenswrapper[4856]: I1122 08:19:40.709628 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:19:40 crc kubenswrapper[4856]: E1122 08:19:40.710556 4856 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:19:52 crc kubenswrapper[4856]: I1122 08:19:52.710043 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:19:52 crc kubenswrapper[4856]: E1122 08:19:52.710733 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:20:05 crc kubenswrapper[4856]: I1122 08:20:05.710602 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:20:05 crc kubenswrapper[4856]: E1122 08:20:05.711863 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.363559 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6qwkq"] Nov 22 08:20:06 crc kubenswrapper[4856]: E1122 08:20:06.363993 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" containerName="collect-profiles" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.364012 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" containerName="collect-profiles" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.364188 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" containerName="collect-profiles" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.365491 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.373460 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qwkq"] Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.499496 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-catalog-content\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.499763 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-utilities\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.499839 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl6z\" (UniqueName: \"kubernetes.io/projected/3146de71-66bf-4c97-89ac-8fb96a7304ce-kube-api-access-kkl6z\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.601972 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-utilities\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.602067 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkl6z\" (UniqueName: \"kubernetes.io/projected/3146de71-66bf-4c97-89ac-8fb96a7304ce-kube-api-access-kkl6z\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.602128 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-catalog-content\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.602726 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-utilities\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.603347 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-catalog-content\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.627210 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kkl6z\" (UniqueName: \"kubernetes.io/projected/3146de71-66bf-4c97-89ac-8fb96a7304ce-kube-api-access-kkl6z\") pod \"redhat-operators-6qwkq\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:06 crc kubenswrapper[4856]: I1122 08:20:06.699782 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:07 crc kubenswrapper[4856]: I1122 08:20:07.133791 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6qwkq"] Nov 22 08:20:07 crc kubenswrapper[4856]: I1122 08:20:07.303683 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qwkq" event={"ID":"3146de71-66bf-4c97-89ac-8fb96a7304ce","Type":"ContainerStarted","Data":"b1400477ef9d465e506ff4f6b69112135c0f45a74e26115540d57a138d3ca7c6"} Nov 22 08:20:08 crc kubenswrapper[4856]: I1122 08:20:08.312820 4856 generic.go:334] "Generic (PLEG): container finished" podID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerID="7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4" exitCode=0 Nov 22 08:20:08 crc kubenswrapper[4856]: I1122 08:20:08.313037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qwkq" event={"ID":"3146de71-66bf-4c97-89ac-8fb96a7304ce","Type":"ContainerDied","Data":"7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4"} Nov 22 08:20:08 crc kubenswrapper[4856]: I1122 08:20:08.315738 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:20:09 crc kubenswrapper[4856]: I1122 08:20:09.323043 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qwkq" event={"ID":"3146de71-66bf-4c97-89ac-8fb96a7304ce","Type":"ContainerStarted","Data":"570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549"} Nov 22 08:20:10 crc kubenswrapper[4856]: I1122 08:20:10.331556 4856 generic.go:334] "Generic (PLEG): container finished" podID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerID="570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549" exitCode=0 Nov 22 08:20:10 crc kubenswrapper[4856]: I1122 08:20:10.331667 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qwkq" event={"ID":"3146de71-66bf-4c97-89ac-8fb96a7304ce","Type":"ContainerDied","Data":"570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549"} Nov 22 08:20:11 crc kubenswrapper[4856]: I1122 08:20:11.340637 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qwkq" event={"ID":"3146de71-66bf-4c97-89ac-8fb96a7304ce","Type":"ContainerStarted","Data":"aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb"} Nov 22 08:20:11 crc kubenswrapper[4856]: I1122 08:20:11.360323 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6qwkq" podStartSLOduration=2.948857396 podStartE2EDuration="5.360303893s" podCreationTimestamp="2025-11-22 08:20:06 +0000 UTC" firstStartedPulling="2025-11-22 08:20:08.315446225 +0000 UTC m=+4650.728839483" lastFinishedPulling="2025-11-22 08:20:10.726892722 +0000 UTC m=+4653.140285980" observedRunningTime="2025-11-22 08:20:11.356882461 +0000 UTC m=+4653.770275729" watchObservedRunningTime="2025-11-22 08:20:11.360303893 +0000 UTC m=+4653.773697151" Nov 22 08:20:16 crc 
kubenswrapper[4856]: I1122 08:20:16.700875 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:16 crc kubenswrapper[4856]: I1122 08:20:16.701201 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:16 crc kubenswrapper[4856]: I1122 08:20:16.745855 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.430382 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8vmn8"] Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.432295 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.437420 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vmn8"] Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.450331 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.554863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7m5s\" (UniqueName: \"kubernetes.io/projected/52255369-66d7-414a-bdb1-f6cf267d4953-kube-api-access-c7m5s\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.554938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-catalog-content\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.554974 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-utilities\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.656004 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7m5s\" (UniqueName: \"kubernetes.io/projected/52255369-66d7-414a-bdb1-f6cf267d4953-kube-api-access-c7m5s\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.656085 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-catalog-content\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.656120 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-utilities\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.656648 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-utilities\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.656754 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-catalog-content\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.677071 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7m5s\" (UniqueName: \"kubernetes.io/projected/52255369-66d7-414a-bdb1-f6cf267d4953-kube-api-access-c7m5s\") pod \"community-operators-8vmn8\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.709148 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:20:17 crc kubenswrapper[4856]: E1122 08:20:17.709407 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:20:17 crc kubenswrapper[4856]: I1122 08:20:17.757527 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:18 crc kubenswrapper[4856]: I1122 08:20:18.243652 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8vmn8"] Nov 22 08:20:18 crc kubenswrapper[4856]: I1122 08:20:18.392566 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vmn8" event={"ID":"52255369-66d7-414a-bdb1-f6cf267d4953","Type":"ContainerStarted","Data":"18c334acdee1d0cca0f0edc180ea18a21d9ce83b3918109a29f3a171eb632c6b"} Nov 22 08:20:19 crc kubenswrapper[4856]: I1122 08:20:19.401694 4856 generic.go:334] "Generic (PLEG): container finished" podID="52255369-66d7-414a-bdb1-f6cf267d4953" containerID="01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2" exitCode=0 Nov 22 08:20:19 crc kubenswrapper[4856]: I1122 08:20:19.401749 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vmn8" event={"ID":"52255369-66d7-414a-bdb1-f6cf267d4953","Type":"ContainerDied","Data":"01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2"} Nov 22 08:20:19 crc kubenswrapper[4856]: I1122 08:20:19.780156 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qwkq"] Nov 22 08:20:19 crc kubenswrapper[4856]: I1122 08:20:19.780978 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6qwkq" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="registry-server" containerID="cri-o://aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb" gracePeriod=2 Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.083779 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.236323 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-utilities\") pod \"3146de71-66bf-4c97-89ac-8fb96a7304ce\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.236370 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkl6z\" (UniqueName: \"kubernetes.io/projected/3146de71-66bf-4c97-89ac-8fb96a7304ce-kube-api-access-kkl6z\") pod \"3146de71-66bf-4c97-89ac-8fb96a7304ce\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.236436 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-catalog-content\") pod \"3146de71-66bf-4c97-89ac-8fb96a7304ce\" (UID: \"3146de71-66bf-4c97-89ac-8fb96a7304ce\") " Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.237314 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-utilities" (OuterVolumeSpecName: "utilities") pod "3146de71-66bf-4c97-89ac-8fb96a7304ce" (UID: "3146de71-66bf-4c97-89ac-8fb96a7304ce"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.241550 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3146de71-66bf-4c97-89ac-8fb96a7304ce-kube-api-access-kkl6z" (OuterVolumeSpecName: "kube-api-access-kkl6z") pod "3146de71-66bf-4c97-89ac-8fb96a7304ce" (UID: "3146de71-66bf-4c97-89ac-8fb96a7304ce"). InnerVolumeSpecName "kube-api-access-kkl6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.320681 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3146de71-66bf-4c97-89ac-8fb96a7304ce" (UID: "3146de71-66bf-4c97-89ac-8fb96a7304ce"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.337996 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.338035 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3146de71-66bf-4c97-89ac-8fb96a7304ce-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.338049 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkl6z\" (UniqueName: \"kubernetes.io/projected/3146de71-66bf-4c97-89ac-8fb96a7304ce-kube-api-access-kkl6z\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.437241 4856 generic.go:334] "Generic (PLEG): container finished" podID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerID="aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb" exitCode=0 Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.437293 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qwkq" event={"ID":"3146de71-66bf-4c97-89ac-8fb96a7304ce","Type":"ContainerDied","Data":"aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb"} Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.437309 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6qwkq" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.437339 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6qwkq" event={"ID":"3146de71-66bf-4c97-89ac-8fb96a7304ce","Type":"ContainerDied","Data":"b1400477ef9d465e506ff4f6b69112135c0f45a74e26115540d57a138d3ca7c6"} Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.437358 4856 scope.go:117] "RemoveContainer" containerID="aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.440496 4856 generic.go:334] "Generic (PLEG): container finished" podID="52255369-66d7-414a-bdb1-f6cf267d4953" containerID="c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0" exitCode=0 Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.440547 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vmn8" event={"ID":"52255369-66d7-414a-bdb1-f6cf267d4953","Type":"ContainerDied","Data":"c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0"} Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.460842 4856 scope.go:117] "RemoveContainer" containerID="570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.473831 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6qwkq"] Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.479100 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6qwkq"] Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.502704 4856 scope.go:117] "RemoveContainer" containerID="7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.519328 4856 scope.go:117] "RemoveContainer" containerID="aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb" Nov 22 08:20:23 crc kubenswrapper[4856]: E1122 08:20:23.519978 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb\": container with ID starting with aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb not found: ID does not exist" containerID="aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.520075 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb"} err="failed to get container status \"aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb\": rpc error: code = NotFound desc = could not find container \"aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb\": container with ID starting with aea9de376a22c41f82b147fd3c0a33431d00ed037bde696d440c5ef6402373bb not found: ID does not exist" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.520169 4856 scope.go:117] "RemoveContainer" containerID="570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549" Nov 22 08:20:23 crc kubenswrapper[4856]: E1122 08:20:23.520794 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549\": container with ID starting with 
570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549 not found: ID does not exist" containerID="570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.520883 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549"} err="failed to get container status \"570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549\": rpc error: code = NotFound desc = could not find container \"570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549\": container with ID starting with 570a5af058d433b1f82238ffe56b275c8ac7c8115e1494e543fb75e9ae21a549 not found: ID does not exist" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.520919 4856 scope.go:117] "RemoveContainer" containerID="7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4" Nov 22 08:20:23 crc kubenswrapper[4856]: E1122 08:20:23.521277 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4\": container with ID starting with 7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4 not found: ID does not exist" containerID="7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4" Nov 22 08:20:23 crc kubenswrapper[4856]: I1122 08:20:23.521300 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4"} err="failed to get container status \"7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4\": rpc error: code = NotFound desc = could not find container \"7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4\": container with ID starting with 7482116ad7c65192ce90b9fb596504bd6545c6e227d85150b03e2eff45f9b6c4 not found: ID does not exist" Nov 22 08:20:24 crc kubenswrapper[4856]: I1122 08:20:24.453152 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vmn8" event={"ID":"52255369-66d7-414a-bdb1-f6cf267d4953","Type":"ContainerStarted","Data":"b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a"} Nov 22 08:20:24 crc kubenswrapper[4856]: I1122 08:20:24.478563 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8vmn8" podStartSLOduration=2.957344791 podStartE2EDuration="7.478545319s" podCreationTimestamp="2025-11-22 08:20:17 +0000 UTC" firstStartedPulling="2025-11-22 08:20:19.403357145 +0000 UTC m=+4661.816750403" lastFinishedPulling="2025-11-22 08:20:23.924557663 +0000 UTC m=+4666.337950931" observedRunningTime="2025-11-22 08:20:24.473116093 +0000 UTC m=+4666.886509351" watchObservedRunningTime="2025-11-22 08:20:24.478545319 +0000 UTC m=+4666.891938567" Nov 22 08:20:24 crc kubenswrapper[4856]: I1122 08:20:24.718495 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" path="/var/lib/kubelet/pods/3146de71-66bf-4c97-89ac-8fb96a7304ce/volumes" Nov 22 08:20:27 crc kubenswrapper[4856]: I1122 08:20:27.758599 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:27 crc kubenswrapper[4856]: I1122 08:20:27.758913 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:27 crc kubenswrapper[4856]: I1122 08:20:27.801212 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:30 crc kubenswrapper[4856]: I1122 08:20:30.709370 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:20:30 crc kubenswrapper[4856]: E1122 08:20:30.709919 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.392178 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rbjbk"] Nov 22 08:20:31 crc kubenswrapper[4856]: E1122 08:20:31.392501 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="extract-content" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.392536 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="extract-content" Nov 22 08:20:31 crc kubenswrapper[4856]: E1122 08:20:31.392580 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="registry-server" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.392592 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="registry-server" Nov 22 08:20:31 crc kubenswrapper[4856]: E1122 08:20:31.392605 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="extract-utilities" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.392613 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="extract-utilities" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.392774 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3146de71-66bf-4c97-89ac-8fb96a7304ce" containerName="registry-server" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.394055 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.403228 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rbjbk"] Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.550374 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxzcz\" (UniqueName: \"kubernetes.io/projected/f5fd987b-0eac-45bc-8087-eab9d5798012-kube-api-access-nxzcz\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.550443 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-utilities\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.550478 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-catalog-content\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.651820 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxzcz\" (UniqueName: \"kubernetes.io/projected/f5fd987b-0eac-45bc-8087-eab9d5798012-kube-api-access-nxzcz\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.651876 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-utilities\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.651901 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-catalog-content\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.652373 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-catalog-content\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.652439 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-utilities\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.686879 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nxzcz\" (UniqueName: \"kubernetes.io/projected/f5fd987b-0eac-45bc-8087-eab9d5798012-kube-api-access-nxzcz\") pod \"certified-operators-rbjbk\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:31 crc kubenswrapper[4856]: I1122 08:20:31.717701 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:32 crc kubenswrapper[4856]: I1122 08:20:32.012586 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rbjbk"] Nov 22 08:20:32 crc kubenswrapper[4856]: I1122 08:20:32.517838 4856 generic.go:334] "Generic (PLEG): container finished" podID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerID="5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736" exitCode=0 Nov 22 08:20:32 crc kubenswrapper[4856]: I1122 08:20:32.517916 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbjbk" event={"ID":"f5fd987b-0eac-45bc-8087-eab9d5798012","Type":"ContainerDied","Data":"5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736"} Nov 22 08:20:32 crc kubenswrapper[4856]: I1122 08:20:32.517951 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbjbk" event={"ID":"f5fd987b-0eac-45bc-8087-eab9d5798012","Type":"ContainerStarted","Data":"fd5ccda6ac4e0676ec8034eb0172359863fe9633ae6debd3002d2b2653355129"} Nov 22 08:20:33 crc kubenswrapper[4856]: I1122 08:20:33.528438 4856 generic.go:334] "Generic (PLEG): container finished" podID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerID="14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678" exitCode=0 Nov 22 08:20:33 crc kubenswrapper[4856]: I1122 08:20:33.528524 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbjbk" event={"ID":"f5fd987b-0eac-45bc-8087-eab9d5798012","Type":"ContainerDied","Data":"14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678"} Nov 22 08:20:35 crc kubenswrapper[4856]: I1122 08:20:35.546938 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbjbk" event={"ID":"f5fd987b-0eac-45bc-8087-eab9d5798012","Type":"ContainerStarted","Data":"1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d"} Nov 22 08:20:35 crc kubenswrapper[4856]: I1122 08:20:35.594669 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rbjbk" podStartSLOduration=2.374250214 podStartE2EDuration="4.594647708s" podCreationTimestamp="2025-11-22 08:20:31 +0000 UTC" firstStartedPulling="2025-11-22 08:20:32.519435665 +0000 UTC m=+4674.932828933" lastFinishedPulling="2025-11-22 08:20:34.739833169 +0000 UTC m=+4677.153226427" observedRunningTime="2025-11-22 08:20:35.589664134 +0000 UTC m=+4678.003057392" watchObservedRunningTime="2025-11-22 08:20:35.594647708 +0000 UTC m=+4678.008040976" Nov 22 08:20:37 crc kubenswrapper[4856]: I1122 08:20:37.797712 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:37 crc kubenswrapper[4856]: I1122 08:20:37.837060 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vmn8"] Nov 22 08:20:38 crc kubenswrapper[4856]: I1122 08:20:38.566456 4856 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8vmn8" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="registry-server" containerID="cri-o://b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a" gracePeriod=2 Nov 22 08:20:38 crc kubenswrapper[4856]: I1122 08:20:38.926629 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.050585 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7m5s\" (UniqueName: \"kubernetes.io/projected/52255369-66d7-414a-bdb1-f6cf267d4953-kube-api-access-c7m5s\") pod \"52255369-66d7-414a-bdb1-f6cf267d4953\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.050784 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-catalog-content\") pod \"52255369-66d7-414a-bdb1-f6cf267d4953\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.050805 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-utilities\") pod \"52255369-66d7-414a-bdb1-f6cf267d4953\" (UID: \"52255369-66d7-414a-bdb1-f6cf267d4953\") " Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.051741 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-utilities" (OuterVolumeSpecName: "utilities") pod "52255369-66d7-414a-bdb1-f6cf267d4953" (UID: "52255369-66d7-414a-bdb1-f6cf267d4953"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.056738 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52255369-66d7-414a-bdb1-f6cf267d4953-kube-api-access-c7m5s" (OuterVolumeSpecName: "kube-api-access-c7m5s") pod "52255369-66d7-414a-bdb1-f6cf267d4953" (UID: "52255369-66d7-414a-bdb1-f6cf267d4953"). InnerVolumeSpecName "kube-api-access-c7m5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.099660 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52255369-66d7-414a-bdb1-f6cf267d4953" (UID: "52255369-66d7-414a-bdb1-f6cf267d4953"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.151837 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.152167 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52255369-66d7-414a-bdb1-f6cf267d4953-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.152254 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7m5s\" (UniqueName: \"kubernetes.io/projected/52255369-66d7-414a-bdb1-f6cf267d4953-kube-api-access-c7m5s\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.578437 4856 generic.go:334] "Generic (PLEG): container finished" podID="52255369-66d7-414a-bdb1-f6cf267d4953" containerID="b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a" exitCode=0 Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.578552 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8vmn8" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.578537 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vmn8" event={"ID":"52255369-66d7-414a-bdb1-f6cf267d4953","Type":"ContainerDied","Data":"b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a"} Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.579208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8vmn8" event={"ID":"52255369-66d7-414a-bdb1-f6cf267d4953","Type":"ContainerDied","Data":"18c334acdee1d0cca0f0edc180ea18a21d9ce83b3918109a29f3a171eb632c6b"} Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.579255 4856 scope.go:117] "RemoveContainer" containerID="b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.602771 4856 scope.go:117] "RemoveContainer" containerID="c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.612793 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8vmn8"] Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.618978 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8vmn8"] Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.645164 4856 scope.go:117] "RemoveContainer" containerID="01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.662722 4856 scope.go:117] "RemoveContainer" containerID="b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a" Nov 22 08:20:39 crc kubenswrapper[4856]: E1122 08:20:39.663216 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a\": container with ID starting with b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a not found: ID does not exist" containerID="b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.663254 
4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a"} err="failed to get container status \"b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a\": rpc error: code = NotFound desc = could not find container \"b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a\": container with ID starting with b4bdfa235bfe840b733959044db86ba814864a1d66d50cf5cb4b78e4c6349e3a not found: ID does not exist" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.663280 4856 scope.go:117] "RemoveContainer" containerID="c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0" Nov 22 08:20:39 crc kubenswrapper[4856]: E1122 08:20:39.663694 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0\": container with ID starting with c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0 not found: ID does not exist" containerID="c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.663726 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0"} err="failed to get container status \"c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0\": rpc error: code = NotFound desc = could not find container \"c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0\": container with ID starting with c886c4342492b4724c25fe554f09bc898ef31337376add829470ba9477cc65f0 not found: ID does not exist" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.663745 4856 scope.go:117] "RemoveContainer" containerID="01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2" Nov 22 08:20:39 crc kubenswrapper[4856]: E1122 08:20:39.663942 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2\": container with ID starting with 01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2 not found: ID does not exist" containerID="01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2" Nov 22 08:20:39 crc kubenswrapper[4856]: I1122 08:20:39.664018 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2"} err="failed to get container status \"01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2\": rpc error: code = NotFound desc = could not find container \"01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2\": container with ID starting with 01301fbae2337bf4b0a5e455496f818c3d893efdceb1aba3eb431a4bfd75d4e2 not found: ID does not exist" Nov 22 08:20:40 crc kubenswrapper[4856]: I1122 08:20:40.720478 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" path="/var/lib/kubelet/pods/52255369-66d7-414a-bdb1-f6cf267d4953/volumes" Nov 22 08:20:41 crc kubenswrapper[4856]: I1122 08:20:41.717981 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:41 crc kubenswrapper[4856]: I1122 08:20:41.718791 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:41 crc kubenswrapper[4856]: I1122 08:20:41.763244 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:42 crc kubenswrapper[4856]: I1122 08:20:42.643671 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:43 crc kubenswrapper[4856]: I1122 08:20:43.174403 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rbjbk"] Nov 22 08:20:44 crc kubenswrapper[4856]: I1122 08:20:44.634624 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rbjbk" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="registry-server" containerID="cri-o://1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d" gracePeriod=2 Nov 22 08:20:44 crc kubenswrapper[4856]: I1122 08:20:44.710067 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:20:44 crc kubenswrapper[4856]: E1122 08:20:44.710357 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.524641 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.642267 4856 generic.go:334] "Generic (PLEG): container finished" podID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerID="1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d" exitCode=0 Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.642306 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbjbk" event={"ID":"f5fd987b-0eac-45bc-8087-eab9d5798012","Type":"ContainerDied","Data":"1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d"} Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.642343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rbjbk" event={"ID":"f5fd987b-0eac-45bc-8087-eab9d5798012","Type":"ContainerDied","Data":"fd5ccda6ac4e0676ec8034eb0172359863fe9633ae6debd3002d2b2653355129"} Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.642363 4856 scope.go:117] "RemoveContainer" containerID="1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.642386 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rbjbk" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.651548 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-catalog-content\") pod \"f5fd987b-0eac-45bc-8087-eab9d5798012\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.651632 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxzcz\" (UniqueName: \"kubernetes.io/projected/f5fd987b-0eac-45bc-8087-eab9d5798012-kube-api-access-nxzcz\") pod \"f5fd987b-0eac-45bc-8087-eab9d5798012\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.651718 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-utilities\") pod \"f5fd987b-0eac-45bc-8087-eab9d5798012\" (UID: \"f5fd987b-0eac-45bc-8087-eab9d5798012\") " Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.652781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-utilities" (OuterVolumeSpecName: "utilities") pod "f5fd987b-0eac-45bc-8087-eab9d5798012" (UID: "f5fd987b-0eac-45bc-8087-eab9d5798012"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.657749 4856 scope.go:117] "RemoveContainer" containerID="14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.696467 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5fd987b-0eac-45bc-8087-eab9d5798012" (UID: "f5fd987b-0eac-45bc-8087-eab9d5798012"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.753276 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.753331 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fd987b-0eac-45bc-8087-eab9d5798012-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.967836 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fd987b-0eac-45bc-8087-eab9d5798012-kube-api-access-nxzcz" (OuterVolumeSpecName: "kube-api-access-nxzcz") pod "f5fd987b-0eac-45bc-8087-eab9d5798012" (UID: "f5fd987b-0eac-45bc-8087-eab9d5798012"). InnerVolumeSpecName "kube-api-access-nxzcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:20:45 crc kubenswrapper[4856]: I1122 08:20:45.977230 4856 scope.go:117] "RemoveContainer" containerID="5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.034238 4856 scope.go:117] "RemoveContainer" containerID="1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d" Nov 22 08:20:46 crc kubenswrapper[4856]: E1122 08:20:46.034690 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d\": container with ID starting with 1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d not found: ID does not exist" containerID="1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.034729 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d"} err="failed to get container status \"1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d\": rpc error: code = NotFound desc = could not find container \"1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d\": container with ID starting with 1bd0deb8ecd6b81143f02271704cf714d6d9a989b99fec021f6df715dfa83f3d not found: ID does not exist" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.034751 4856 scope.go:117] "RemoveContainer" containerID="14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678" Nov 22 08:20:46 crc kubenswrapper[4856]: E1122 08:20:46.035203 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678\": container with ID starting with 14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678 not found: ID does not exist" containerID="14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.035239 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678"} err="failed to get container status \"14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678\": rpc error: code = NotFound desc = could not find container \"14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678\": container with ID starting with 14ea9edf2d27728af2a838b3292137b5318416e69277a90a10c20ad8c05b4678 not found: ID does not exist" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.035280 4856 scope.go:117] "RemoveContainer" containerID="5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736" Nov 22 08:20:46 crc kubenswrapper[4856]: E1122 08:20:46.035554 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736\": container with ID starting with 5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736 not found: ID does not exist" containerID="5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.035589 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736"} err="failed to get container status \"5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736\": rpc error: code = NotFound desc = could not find container \"5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736\": container with ID starting with 5a28696985b6b2ae2628b3fd66f9036417d0f04bd5014af3fc681a7d7a677736 not found: ID does not exist" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.059216 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxzcz\" (UniqueName: \"kubernetes.io/projected/f5fd987b-0eac-45bc-8087-eab9d5798012-kube-api-access-nxzcz\") on node \"crc\" DevicePath \"\"" Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.276762 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rbjbk"] Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.281381 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rbjbk"] Nov 22 08:20:46 crc kubenswrapper[4856]: I1122 08:20:46.718362 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" path="/var/lib/kubelet/pods/f5fd987b-0eac-45bc-8087-eab9d5798012/volumes" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.029366 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mcjl9"] Nov 22 08:20:55 crc kubenswrapper[4856]: E1122 08:20:55.030139 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="extract-content" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030151 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="extract-content" Nov 22 08:20:55 crc kubenswrapper[4856]: E1122 08:20:55.030171 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="extract-content" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030177 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="extract-content" Nov 22 08:20:55 crc kubenswrapper[4856]: E1122 08:20:55.030190 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="registry-server" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030197 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="registry-server" Nov 22 08:20:55 crc kubenswrapper[4856]: E1122 08:20:55.030208 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="extract-utilities" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030214 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="extract-utilities" Nov 22 08:20:55 crc kubenswrapper[4856]: E1122 08:20:55.030224 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="registry-server" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030229 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="registry-server" Nov 22 08:20:55 crc kubenswrapper[4856]: E1122 
08:20:55.030242 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="extract-utilities" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030248 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="extract-utilities" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030388 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fd987b-0eac-45bc-8087-eab9d5798012" containerName="registry-server" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.030404 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="52255369-66d7-414a-bdb1-f6cf267d4953" containerName="registry-server" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.031358 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.041404 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcjl9"] Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.182098 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-utilities\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.182475 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-catalog-content\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.182614 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjdrj\" (UniqueName: \"kubernetes.io/projected/d21c79d7-e636-4661-a6dc-d05c4c36e843-kube-api-access-gjdrj\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.283843 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-catalog-content\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.283924 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjdrj\" (UniqueName: \"kubernetes.io/projected/d21c79d7-e636-4661-a6dc-d05c4c36e843-kube-api-access-gjdrj\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.283960 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-utilities\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 
22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.284555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-utilities\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.284628 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-catalog-content\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.303895 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjdrj\" (UniqueName: \"kubernetes.io/projected/d21c79d7-e636-4661-a6dc-d05c4c36e843-kube-api-access-gjdrj\") pod \"redhat-marketplace-mcjl9\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.351246 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:20:55 crc kubenswrapper[4856]: I1122 08:20:55.783879 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcjl9"] Nov 22 08:20:55 crc kubenswrapper[4856]: W1122 08:20:55.874054 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd21c79d7_e636_4661_a6dc_d05c4c36e843.slice/crio-a0dc6365de95746c9f8878937122e6ba8efe404d4855ce0559c1aff93a229972 WatchSource:0}: Error finding container a0dc6365de95746c9f8878937122e6ba8efe404d4855ce0559c1aff93a229972: Status 404 returned error can't find the container with id a0dc6365de95746c9f8878937122e6ba8efe404d4855ce0559c1aff93a229972 Nov 22 08:20:56 crc kubenswrapper[4856]: I1122 08:20:56.727956 4856 generic.go:334] "Generic (PLEG): container finished" podID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerID="af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a" exitCode=0 Nov 22 08:20:56 crc kubenswrapper[4856]: I1122 08:20:56.728081 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcjl9" event={"ID":"d21c79d7-e636-4661-a6dc-d05c4c36e843","Type":"ContainerDied","Data":"af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a"} Nov 22 08:20:56 crc kubenswrapper[4856]: I1122 08:20:56.728886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcjl9" event={"ID":"d21c79d7-e636-4661-a6dc-d05c4c36e843","Type":"ContainerStarted","Data":"a0dc6365de95746c9f8878937122e6ba8efe404d4855ce0559c1aff93a229972"} Nov 22 08:20:57 crc kubenswrapper[4856]: I1122 08:20:57.738951 4856 generic.go:334] "Generic (PLEG): container finished" podID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerID="93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa" exitCode=0 Nov 22 08:20:57 crc kubenswrapper[4856]: I1122 08:20:57.739037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcjl9" event={"ID":"d21c79d7-e636-4661-a6dc-d05c4c36e843","Type":"ContainerDied","Data":"93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa"} Nov 22 08:20:58 crc 
kubenswrapper[4856]: I1122 08:20:58.715146 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:20:58 crc kubenswrapper[4856]: E1122 08:20:58.715731 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:20:58 crc kubenswrapper[4856]: I1122 08:20:58.755070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcjl9" event={"ID":"d21c79d7-e636-4661-a6dc-d05c4c36e843","Type":"ContainerStarted","Data":"2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8"} Nov 22 08:20:58 crc kubenswrapper[4856]: I1122 08:20:58.775324 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mcjl9" podStartSLOduration=2.35551104 podStartE2EDuration="3.775302791s" podCreationTimestamp="2025-11-22 08:20:55 +0000 UTC" firstStartedPulling="2025-11-22 08:20:56.731700228 +0000 UTC m=+4699.145093496" lastFinishedPulling="2025-11-22 08:20:58.151491989 +0000 UTC m=+4700.564885247" observedRunningTime="2025-11-22 08:20:58.771350674 +0000 UTC m=+4701.184743942" watchObservedRunningTime="2025-11-22 08:20:58.775302791 +0000 UTC m=+4701.188696049" Nov 22 08:21:05 crc kubenswrapper[4856]: I1122 08:21:05.352440 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:21:05 crc kubenswrapper[4856]: I1122 08:21:05.352974 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:21:05 crc kubenswrapper[4856]: I1122 08:21:05.396565 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:21:05 crc kubenswrapper[4856]: I1122 08:21:05.861489 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:21:05 crc kubenswrapper[4856]: I1122 08:21:05.905724 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcjl9"] Nov 22 08:21:07 crc kubenswrapper[4856]: I1122 08:21:07.821746 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mcjl9" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="registry-server" containerID="cri-o://2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8" gracePeriod=2 Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.228106 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.373410 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjdrj\" (UniqueName: \"kubernetes.io/projected/d21c79d7-e636-4661-a6dc-d05c4c36e843-kube-api-access-gjdrj\") pod \"d21c79d7-e636-4661-a6dc-d05c4c36e843\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.373526 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-utilities\") pod \"d21c79d7-e636-4661-a6dc-d05c4c36e843\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.373570 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-catalog-content\") pod \"d21c79d7-e636-4661-a6dc-d05c4c36e843\" (UID: \"d21c79d7-e636-4661-a6dc-d05c4c36e843\") " Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.374344 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-utilities" (OuterVolumeSpecName: "utilities") pod "d21c79d7-e636-4661-a6dc-d05c4c36e843" (UID: "d21c79d7-e636-4661-a6dc-d05c4c36e843"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.378314 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d21c79d7-e636-4661-a6dc-d05c4c36e843-kube-api-access-gjdrj" (OuterVolumeSpecName: "kube-api-access-gjdrj") pod "d21c79d7-e636-4661-a6dc-d05c4c36e843" (UID: "d21c79d7-e636-4661-a6dc-d05c4c36e843"). InnerVolumeSpecName "kube-api-access-gjdrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.396332 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d21c79d7-e636-4661-a6dc-d05c4c36e843" (UID: "d21c79d7-e636-4661-a6dc-d05c4c36e843"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.475413 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjdrj\" (UniqueName: \"kubernetes.io/projected/d21c79d7-e636-4661-a6dc-d05c4c36e843-kube-api-access-gjdrj\") on node \"crc\" DevicePath \"\"" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.475765 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.475781 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d21c79d7-e636-4661-a6dc-d05c4c36e843-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.833180 4856 generic.go:334] "Generic (PLEG): container finished" podID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerID="2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8" exitCode=0 Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.833244 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcjl9" event={"ID":"d21c79d7-e636-4661-a6dc-d05c4c36e843","Type":"ContainerDied","Data":"2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8"} Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.833286 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcjl9" event={"ID":"d21c79d7-e636-4661-a6dc-d05c4c36e843","Type":"ContainerDied","Data":"a0dc6365de95746c9f8878937122e6ba8efe404d4855ce0559c1aff93a229972"} Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.833301 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcjl9" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.833310 4856 scope.go:117] "RemoveContainer" containerID="2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.857378 4856 scope.go:117] "RemoveContainer" containerID="93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.858138 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcjl9"] Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.863613 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcjl9"] Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.876861 4856 scope.go:117] "RemoveContainer" containerID="af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.903011 4856 scope.go:117] "RemoveContainer" containerID="2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8" Nov 22 08:21:08 crc kubenswrapper[4856]: E1122 08:21:08.903474 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8\": container with ID starting with 2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8 not found: ID does not exist" containerID="2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.903537 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8"} err="failed to get container status \"2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8\": rpc error: code = NotFound desc = could not find container \"2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8\": container with ID starting with 2aabaea734d09fb4a2622a28e7b61d86e97809ee4c119ffe51c53c0736e11dd8 not found: ID does not exist" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.903565 4856 scope.go:117] "RemoveContainer" containerID="93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa" Nov 22 08:21:08 crc kubenswrapper[4856]: E1122 08:21:08.903903 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa\": container with ID starting with 93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa not found: ID does not exist" containerID="93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.903931 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa"} err="failed to get container status \"93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa\": rpc error: code = NotFound desc = could not find container \"93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa\": container with ID starting with 93b0d97640197c5c6562cf4726a1275fe60fad5b604e849e3b8c45111015bfaa not found: ID does not exist" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.903945 4856 scope.go:117] "RemoveContainer" 
containerID="af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a" Nov 22 08:21:08 crc kubenswrapper[4856]: E1122 08:21:08.904186 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a\": container with ID starting with af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a not found: ID does not exist" containerID="af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a" Nov 22 08:21:08 crc kubenswrapper[4856]: I1122 08:21:08.904221 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a"} err="failed to get container status \"af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a\": rpc error: code = NotFound desc = could not find container \"af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a\": container with ID starting with af7a9b89e445a899ac0a96ddf4299cf45cc2fbd22f52e2de8687110108588b5a not found: ID does not exist" Nov 22 08:21:10 crc kubenswrapper[4856]: I1122 08:21:10.720064 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" path="/var/lib/kubelet/pods/d21c79d7-e636-4661-a6dc-d05c4c36e843/volumes" Nov 22 08:21:13 crc kubenswrapper[4856]: I1122 08:21:13.709945 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:21:13 crc kubenswrapper[4856]: E1122 08:21:13.710501 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:21:27 crc kubenswrapper[4856]: I1122 08:21:27.710001 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:21:27 crc kubenswrapper[4856]: E1122 08:21:27.710798 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:21:42 crc kubenswrapper[4856]: I1122 08:21:42.709410 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:21:42 crc kubenswrapper[4856]: E1122 08:21:42.710189 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:21:55 crc kubenswrapper[4856]: I1122 08:21:55.710152 4856 scope.go:117] "RemoveContainer" 
containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:21:55 crc kubenswrapper[4856]: E1122 08:21:55.710985 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:22:08 crc kubenswrapper[4856]: I1122 08:22:08.713609 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:22:08 crc kubenswrapper[4856]: E1122 08:22:08.714277 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:22:23 crc kubenswrapper[4856]: I1122 08:22:23.710159 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:22:23 crc kubenswrapper[4856]: E1122 08:22:23.711343 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:22:36 crc kubenswrapper[4856]: I1122 08:22:36.709558 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:22:36 crc kubenswrapper[4856]: E1122 08:22:36.710270 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:22:51 crc kubenswrapper[4856]: I1122 08:22:51.709926 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:22:51 crc kubenswrapper[4856]: E1122 08:22:51.711132 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:23:02 crc kubenswrapper[4856]: I1122 08:23:02.710542 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:23:02 crc kubenswrapper[4856]: E1122 08:23:02.711436 4856 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:23:16 crc kubenswrapper[4856]: I1122 08:23:16.711351 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:23:16 crc kubenswrapper[4856]: E1122 08:23:16.712081 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:23:30 crc kubenswrapper[4856]: I1122 08:23:30.709730 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:23:30 crc kubenswrapper[4856]: I1122 08:23:30.927437 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"9ce3e3934fdbe90bae0874d6336f40d827becf7fe198989484e4604fe47fb112"} Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.114826 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-v4vg6"] Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.121056 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-v4vg6"] Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.248596 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-4cfz9"] Nov 22 08:24:16 crc kubenswrapper[4856]: E1122 08:24:16.249200 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="extract-content" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.249240 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="extract-content" Nov 22 08:24:16 crc kubenswrapper[4856]: E1122 08:24:16.249298 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="registry-server" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.249312 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="registry-server" Nov 22 08:24:16 crc kubenswrapper[4856]: E1122 08:24:16.249346 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="extract-utilities" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.249360 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="extract-utilities" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.249657 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d21c79d7-e636-4661-a6dc-d05c4c36e843" containerName="registry-server" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.250700 4856 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.252835 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.254037 4856 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-zw7lf" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.254284 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.254569 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.258351 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-4cfz9"] Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.396341 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5t6n\" (UniqueName: \"kubernetes.io/projected/74a868fc-daab-46da-ad58-643a8d4c7089-kube-api-access-v5t6n\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.396963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/74a868fc-daab-46da-ad58-643a8d4c7089-node-mnt\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.397036 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/74a868fc-daab-46da-ad58-643a8d4c7089-crc-storage\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.498393 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5t6n\" (UniqueName: \"kubernetes.io/projected/74a868fc-daab-46da-ad58-643a8d4c7089-kube-api-access-v5t6n\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.498455 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/74a868fc-daab-46da-ad58-643a8d4c7089-node-mnt\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.498580 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/74a868fc-daab-46da-ad58-643a8d4c7089-crc-storage\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.498928 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/74a868fc-daab-46da-ad58-643a8d4c7089-node-mnt\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " 
pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.499656 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/74a868fc-daab-46da-ad58-643a8d4c7089-crc-storage\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.519832 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5t6n\" (UniqueName: \"kubernetes.io/projected/74a868fc-daab-46da-ad58-643a8d4c7089-kube-api-access-v5t6n\") pod \"crc-storage-crc-4cfz9\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.571620 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.738422 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c3d51ee-61ce-4d1b-936d-a69a12c83fb5" path="/var/lib/kubelet/pods/5c3d51ee-61ce-4d1b-936d-a69a12c83fb5/volumes" Nov 22 08:24:16 crc kubenswrapper[4856]: I1122 08:24:16.996316 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-4cfz9"] Nov 22 08:24:17 crc kubenswrapper[4856]: I1122 08:24:17.281098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-4cfz9" event={"ID":"74a868fc-daab-46da-ad58-643a8d4c7089","Type":"ContainerStarted","Data":"74a9e9058319f5442c9162bd1ed9330b828cecc239cf1a6263a4634d142126c3"} Nov 22 08:24:19 crc kubenswrapper[4856]: I1122 08:24:19.297740 4856 generic.go:334] "Generic (PLEG): container finished" podID="74a868fc-daab-46da-ad58-643a8d4c7089" containerID="5cb3cc7f64c76e128baf9671310b5089aa84855b0cff1d241021f610973c772f" exitCode=0 Nov 22 08:24:19 crc kubenswrapper[4856]: I1122 08:24:19.297829 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-4cfz9" event={"ID":"74a868fc-daab-46da-ad58-643a8d4c7089","Type":"ContainerDied","Data":"5cb3cc7f64c76e128baf9671310b5089aa84855b0cff1d241021f610973c772f"} Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.583259 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.656259 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/74a868fc-daab-46da-ad58-643a8d4c7089-crc-storage\") pod \"74a868fc-daab-46da-ad58-643a8d4c7089\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.656326 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/74a868fc-daab-46da-ad58-643a8d4c7089-node-mnt\") pod \"74a868fc-daab-46da-ad58-643a8d4c7089\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.656439 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5t6n\" (UniqueName: \"kubernetes.io/projected/74a868fc-daab-46da-ad58-643a8d4c7089-kube-api-access-v5t6n\") pod \"74a868fc-daab-46da-ad58-643a8d4c7089\" (UID: \"74a868fc-daab-46da-ad58-643a8d4c7089\") " Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.656471 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a868fc-daab-46da-ad58-643a8d4c7089-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "74a868fc-daab-46da-ad58-643a8d4c7089" (UID: "74a868fc-daab-46da-ad58-643a8d4c7089"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.656854 4856 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/74a868fc-daab-46da-ad58-643a8d4c7089-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.661854 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a868fc-daab-46da-ad58-643a8d4c7089-kube-api-access-v5t6n" (OuterVolumeSpecName: "kube-api-access-v5t6n") pod "74a868fc-daab-46da-ad58-643a8d4c7089" (UID: "74a868fc-daab-46da-ad58-643a8d4c7089"). InnerVolumeSpecName "kube-api-access-v5t6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.674386 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a868fc-daab-46da-ad58-643a8d4c7089-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "74a868fc-daab-46da-ad58-643a8d4c7089" (UID: "74a868fc-daab-46da-ad58-643a8d4c7089"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.758948 4856 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/74a868fc-daab-46da-ad58-643a8d4c7089-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 08:24:20 crc kubenswrapper[4856]: I1122 08:24:20.759288 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5t6n\" (UniqueName: \"kubernetes.io/projected/74a868fc-daab-46da-ad58-643a8d4c7089-kube-api-access-v5t6n\") on node \"crc\" DevicePath \"\"" Nov 22 08:24:21 crc kubenswrapper[4856]: I1122 08:24:21.316632 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-4cfz9" event={"ID":"74a868fc-daab-46da-ad58-643a8d4c7089","Type":"ContainerDied","Data":"74a9e9058319f5442c9162bd1ed9330b828cecc239cf1a6263a4634d142126c3"} Nov 22 08:24:21 crc kubenswrapper[4856]: I1122 08:24:21.316685 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a9e9058319f5442c9162bd1ed9330b828cecc239cf1a6263a4634d142126c3" Nov 22 08:24:21 crc kubenswrapper[4856]: I1122 08:24:21.316783 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-4cfz9" Nov 22 08:24:21 crc kubenswrapper[4856]: I1122 08:24:21.652984 4856 scope.go:117] "RemoveContainer" containerID="1b78db4dcb6243b88818e65cb9e71f6ee40dec58af9897cd29c76851b4505745" Nov 22 08:24:22 crc kubenswrapper[4856]: I1122 08:24:22.968035 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-4cfz9"] Nov 22 08:24:22 crc kubenswrapper[4856]: I1122 08:24:22.974484 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-4cfz9"] Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.128675 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-jjjl2"] Nov 22 08:24:23 crc kubenswrapper[4856]: E1122 08:24:23.129050 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a868fc-daab-46da-ad58-643a8d4c7089" containerName="storage" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.129071 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a868fc-daab-46da-ad58-643a8d4c7089" containerName="storage" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.129203 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a868fc-daab-46da-ad58-643a8d4c7089" containerName="storage" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.129770 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.132226 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.132284 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.133175 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.133418 4856 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-zw7lf" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.144495 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jjjl2"] Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.191411 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s92p\" (UniqueName: \"kubernetes.io/projected/4a9d20f3-068c-48be-94df-308c16e2aa0f-kube-api-access-9s92p\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.191486 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a9d20f3-068c-48be-94df-308c16e2aa0f-node-mnt\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.191564 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a9d20f3-068c-48be-94df-308c16e2aa0f-crc-storage\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.293380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a9d20f3-068c-48be-94df-308c16e2aa0f-crc-storage\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.293502 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s92p\" (UniqueName: \"kubernetes.io/projected/4a9d20f3-068c-48be-94df-308c16e2aa0f-kube-api-access-9s92p\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.293636 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a9d20f3-068c-48be-94df-308c16e2aa0f-node-mnt\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.293955 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a9d20f3-068c-48be-94df-308c16e2aa0f-node-mnt\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " 
pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.294275 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a9d20f3-068c-48be-94df-308c16e2aa0f-crc-storage\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.311007 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s92p\" (UniqueName: \"kubernetes.io/projected/4a9d20f3-068c-48be-94df-308c16e2aa0f-kube-api-access-9s92p\") pod \"crc-storage-crc-jjjl2\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:23 crc kubenswrapper[4856]: I1122 08:24:23.450471 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:26 crc kubenswrapper[4856]: I1122 08:24:24.718424 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a868fc-daab-46da-ad58-643a8d4c7089" path="/var/lib/kubelet/pods/74a868fc-daab-46da-ad58-643a8d4c7089/volumes" Nov 22 08:24:27 crc kubenswrapper[4856]: I1122 08:24:27.273220 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jjjl2"] Nov 22 08:24:27 crc kubenswrapper[4856]: I1122 08:24:27.365461 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jjjl2" event={"ID":"4a9d20f3-068c-48be-94df-308c16e2aa0f","Type":"ContainerStarted","Data":"44667fd9e4777e3ec763309a62e2142333cac28a4205035f68a5e93264e8f41e"} Nov 22 08:24:30 crc kubenswrapper[4856]: I1122 08:24:30.390667 4856 generic.go:334] "Generic (PLEG): container finished" podID="4a9d20f3-068c-48be-94df-308c16e2aa0f" containerID="af56aed0e10fb14c56bcd59e415e66c0b7a6245c4684843e1ca3e386d734d668" exitCode=0 Nov 22 08:24:30 crc kubenswrapper[4856]: I1122 08:24:30.390766 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jjjl2" event={"ID":"4a9d20f3-068c-48be-94df-308c16e2aa0f","Type":"ContainerDied","Data":"af56aed0e10fb14c56bcd59e415e66c0b7a6245c4684843e1ca3e386d734d668"} Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.665628 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.819471 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a9d20f3-068c-48be-94df-308c16e2aa0f-node-mnt\") pod \"4a9d20f3-068c-48be-94df-308c16e2aa0f\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.819632 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s92p\" (UniqueName: \"kubernetes.io/projected/4a9d20f3-068c-48be-94df-308c16e2aa0f-kube-api-access-9s92p\") pod \"4a9d20f3-068c-48be-94df-308c16e2aa0f\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.819703 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a9d20f3-068c-48be-94df-308c16e2aa0f-crc-storage\") pod \"4a9d20f3-068c-48be-94df-308c16e2aa0f\" (UID: \"4a9d20f3-068c-48be-94df-308c16e2aa0f\") " Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.819713 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4a9d20f3-068c-48be-94df-308c16e2aa0f-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "4a9d20f3-068c-48be-94df-308c16e2aa0f" (UID: "4a9d20f3-068c-48be-94df-308c16e2aa0f"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.820095 4856 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/4a9d20f3-068c-48be-94df-308c16e2aa0f-node-mnt\") on node \"crc\" DevicePath \"\"" Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.825731 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9d20f3-068c-48be-94df-308c16e2aa0f-kube-api-access-9s92p" (OuterVolumeSpecName: "kube-api-access-9s92p") pod "4a9d20f3-068c-48be-94df-308c16e2aa0f" (UID: "4a9d20f3-068c-48be-94df-308c16e2aa0f"). InnerVolumeSpecName "kube-api-access-9s92p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.839011 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9d20f3-068c-48be-94df-308c16e2aa0f-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "4a9d20f3-068c-48be-94df-308c16e2aa0f" (UID: "4a9d20f3-068c-48be-94df-308c16e2aa0f"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.921135 4856 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/4a9d20f3-068c-48be-94df-308c16e2aa0f-crc-storage\") on node \"crc\" DevicePath \"\"" Nov 22 08:24:31 crc kubenswrapper[4856]: I1122 08:24:31.921712 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s92p\" (UniqueName: \"kubernetes.io/projected/4a9d20f3-068c-48be-94df-308c16e2aa0f-kube-api-access-9s92p\") on node \"crc\" DevicePath \"\"" Nov 22 08:24:32 crc kubenswrapper[4856]: I1122 08:24:32.411395 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jjjl2" event={"ID":"4a9d20f3-068c-48be-94df-308c16e2aa0f","Type":"ContainerDied","Data":"44667fd9e4777e3ec763309a62e2142333cac28a4205035f68a5e93264e8f41e"} Nov 22 08:24:32 crc kubenswrapper[4856]: I1122 08:24:32.411437 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44667fd9e4777e3ec763309a62e2142333cac28a4205035f68a5e93264e8f41e" Nov 22 08:24:32 crc kubenswrapper[4856]: I1122 08:24:32.411521 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jjjl2" Nov 22 08:25:59 crc kubenswrapper[4856]: I1122 08:25:59.755057 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:25:59 crc kubenswrapper[4856]: I1122 08:25:59.755629 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:26:29 crc kubenswrapper[4856]: I1122 08:26:29.755213 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:26:29 crc kubenswrapper[4856]: I1122 08:26:29.755768 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.601413 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-k8s8c"] Nov 22 08:26:37 crc kubenswrapper[4856]: E1122 08:26:37.602696 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9d20f3-068c-48be-94df-308c16e2aa0f" containerName="storage" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.602714 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9d20f3-068c-48be-94df-308c16e2aa0f" containerName="storage" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.602868 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9d20f3-068c-48be-94df-308c16e2aa0f" containerName="storage" Nov 22 
08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.610427 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.613846 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.614672 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.615482 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.615662 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-lj4lj" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.618727 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-k8s8c"] Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.629736 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-8b66s"] Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.639040 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.648514 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.655963 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-8b66s"] Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.775885 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-dns-svc\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.775969 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpxs6\" (UniqueName: \"kubernetes.io/projected/c58c1559-e306-4c9c-b909-0713c8a84710-kube-api-access-vpxs6\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.776012 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-config\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.776032 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d403750d-73ae-4025-b7e1-c83c315a5985-config\") pod \"dnsmasq-dns-6bbc85cdbf-k8s8c\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.776585 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn2g5\" (UniqueName: \"kubernetes.io/projected/d403750d-73ae-4025-b7e1-c83c315a5985-kube-api-access-jn2g5\") 
pod \"dnsmasq-dns-6bbc85cdbf-k8s8c\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.877968 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn2g5\" (UniqueName: \"kubernetes.io/projected/d403750d-73ae-4025-b7e1-c83c315a5985-kube-api-access-jn2g5\") pod \"dnsmasq-dns-6bbc85cdbf-k8s8c\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.878129 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-dns-svc\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.878170 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpxs6\" (UniqueName: \"kubernetes.io/projected/c58c1559-e306-4c9c-b909-0713c8a84710-kube-api-access-vpxs6\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.878206 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-config\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.878230 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d403750d-73ae-4025-b7e1-c83c315a5985-config\") pod \"dnsmasq-dns-6bbc85cdbf-k8s8c\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.879361 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-dns-svc\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.879481 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d403750d-73ae-4025-b7e1-c83c315a5985-config\") pod \"dnsmasq-dns-6bbc85cdbf-k8s8c\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.879483 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-config\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.903729 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn2g5\" (UniqueName: \"kubernetes.io/projected/d403750d-73ae-4025-b7e1-c83c315a5985-kube-api-access-jn2g5\") pod \"dnsmasq-dns-6bbc85cdbf-k8s8c\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 
22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.904132 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpxs6\" (UniqueName: \"kubernetes.io/projected/c58c1559-e306-4c9c-b909-0713c8a84710-kube-api-access-vpxs6\") pod \"dnsmasq-dns-7c4878bb99-8b66s\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.937059 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:37 crc kubenswrapper[4856]: I1122 08:26:37.962294 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.395766 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-k8s8c"] Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.402456 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.449371 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-8b66s"] Nov 22 08:26:38 crc kubenswrapper[4856]: W1122 08:26:38.454204 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc58c1559_e306_4c9c_b909_0713c8a84710.slice/crio-c26df6940039928cb97e23dae1795a5b7b0fd5e6400fb7d6be43aa5ad3680a31 WatchSource:0}: Error finding container c26df6940039928cb97e23dae1795a5b7b0fd5e6400fb7d6be43aa5ad3680a31: Status 404 returned error can't find the container with id c26df6940039928cb97e23dae1795a5b7b0fd5e6400fb7d6be43aa5ad3680a31 Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.662600 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-k8s8c"] Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.686333 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59688db5f9-pt45n"] Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.688056 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.711275 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqnw2\" (UniqueName: \"kubernetes.io/projected/8c37e2f9-b495-43c5-9738-85293ff06e7d-kube-api-access-jqnw2\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.711394 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-dns-svc\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.711430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-config\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.797007 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59688db5f9-pt45n"] Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.814252 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqnw2\" (UniqueName: \"kubernetes.io/projected/8c37e2f9-b495-43c5-9738-85293ff06e7d-kube-api-access-jqnw2\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.814332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-dns-svc\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.814393 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-config\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.817037 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-config\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.818284 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-dns-svc\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.857739 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqnw2\" (UniqueName: 
\"kubernetes.io/projected/8c37e2f9-b495-43c5-9738-85293ff06e7d-kube-api-access-jqnw2\") pod \"dnsmasq-dns-59688db5f9-pt45n\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:38 crc kubenswrapper[4856]: I1122 08:26:38.988037 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-8b66s"] Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.005768 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-n8854"] Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.006990 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.025097 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-n8854"] Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.029219 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.133031 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l5bc\" (UniqueName: \"kubernetes.io/projected/0996884b-451f-4d66-85bf-680b8be0d7ee-kube-api-access-8l5bc\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.133165 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-dns-svc\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.133355 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-config\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.234341 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-config\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.234395 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l5bc\" (UniqueName: \"kubernetes.io/projected/0996884b-451f-4d66-85bf-680b8be0d7ee-kube-api-access-8l5bc\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.234493 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-dns-svc\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.235465 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-dns-svc\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.236160 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-config\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.290275 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l5bc\" (UniqueName: \"kubernetes.io/projected/0996884b-451f-4d66-85bf-680b8be0d7ee-kube-api-access-8l5bc\") pod \"dnsmasq-dns-574cff9d7f-n8854\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.343109 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.379171 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" event={"ID":"d403750d-73ae-4025-b7e1-c83c315a5985","Type":"ContainerStarted","Data":"537cf0467b17bf980dc6cc84f631d8361bdfa576a9d05a995a076ea2b27700b0"} Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.384513 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" event={"ID":"c58c1559-e306-4c9c-b909-0713c8a84710","Type":"ContainerStarted","Data":"c26df6940039928cb97e23dae1795a5b7b0fd5e6400fb7d6be43aa5ad3680a31"} Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.616324 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59688db5f9-pt45n"] Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.823429 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.825353 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.828464 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hvgpt" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.828701 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.828901 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.829090 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.829265 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.829370 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.829467 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.845965 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.943561 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-n8854"] Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.944387 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.944466 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b673936f-4f1b-42ea-b1da-12b855b8ee6d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.944496 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.944621 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.944679 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 
08:26:39.944733 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.944837 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b673936f-4f1b-42ea-b1da-12b855b8ee6d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.944906 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.945024 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.945056 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vgf\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-kube-api-access-m2vgf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:39 crc kubenswrapper[4856]: I1122 08:26:39.945087 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046672 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046727 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b673936f-4f1b-42ea-b1da-12b855b8ee6d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046759 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046786 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046813 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046844 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046881 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b673936f-4f1b-42ea-b1da-12b855b8ee6d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046909 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.046986 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2vgf\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-kube-api-access-m2vgf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.047396 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.047471 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.048430 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.049611 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-config-data\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.049954 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.051805 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b673936f-4f1b-42ea-b1da-12b855b8ee6d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.052193 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.052730 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.060753 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.060798 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/675b50c71cf4d76e24255932f233e1308e49f3fbbec5594a63f6595cbe644c77/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.072630 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2vgf\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-kube-api-access-m2vgf\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.081797 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b673936f-4f1b-42ea-b1da-12b855b8ee6d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.111262 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.131500 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.133732 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.137679 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.137716 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.137753 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.137913 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.138105 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.138412 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd4vk" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.138604 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.161922 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.188259 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249345 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249419 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249457 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc30d930-50c7-4002-b44d-80f76828c9c1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249489 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249541 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/cc30d930-50c7-4002-b44d-80f76828c9c1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249615 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8xm4\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-kube-api-access-q8xm4\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.249646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.352471 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8xm4\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-kube-api-access-q8xm4\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.352584 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353223 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353429 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353451 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353494 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc30d930-50c7-4002-b44d-80f76828c9c1-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353551 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353594 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.353613 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc30d930-50c7-4002-b44d-80f76828c9c1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.354287 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.354985 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.355294 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.355757 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.356743 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 
08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.359410 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc30d930-50c7-4002-b44d-80f76828c9c1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.359985 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.360056 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de4312679e98056a9e2352058a52abe0161146fbcf6616a28fa4f9c792b7b37c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.360472 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.364004 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.364211 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc30d930-50c7-4002-b44d-80f76828c9c1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.372345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8xm4\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-kube-api-access-q8xm4\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.403147 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" event={"ID":"0996884b-451f-4d66-85bf-680b8be0d7ee","Type":"ContainerStarted","Data":"e1d6faa0e2ec94216acf79baf969007655bc881c622b5d3ba00b46b05c410118"} Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.406064 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" event={"ID":"8c37e2f9-b495-43c5-9738-85293ff06e7d","Type":"ContainerStarted","Data":"17913761f792381d03464ff0c97e6fd069ccf985fe22add433a5a765074b6140"} Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.406419 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.466457 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.704891 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.878123 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.882554 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.890958 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ql8dm" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.895805 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.898307 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.898592 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.918586 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 22 08:26:40 crc kubenswrapper[4856]: I1122 08:26:40.918918 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.066656 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274230c4-41e5-433a-8878-a09cd3ea7de8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.066713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/274230c4-41e5-433a-8878-a09cd3ea7de8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.066735 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjxvl\" (UniqueName: \"kubernetes.io/projected/274230c4-41e5-433a-8878-a09cd3ea7de8-kube-api-access-rjxvl\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.066798 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.066886 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-kolla-config\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.066947 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-config-data-default\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.067012 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/274230c4-41e5-433a-8878-a09cd3ea7de8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.067060 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-config-data-default\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176748 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/274230c4-41e5-433a-8878-a09cd3ea7de8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176800 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176836 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274230c4-41e5-433a-8878-a09cd3ea7de8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176865 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/274230c4-41e5-433a-8878-a09cd3ea7de8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176888 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rjxvl\" (UniqueName: \"kubernetes.io/projected/274230c4-41e5-433a-8878-a09cd3ea7de8-kube-api-access-rjxvl\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.176971 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-kolla-config\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.177831 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-kolla-config\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.178606 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-config-data-default\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.178896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/274230c4-41e5-433a-8878-a09cd3ea7de8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.185644 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.185690 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8e071774d688d0342887fe0468d787d90fcf778159bf1e6048edfa019bbac92/globalmount\"" pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.186892 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/274230c4-41e5-433a-8878-a09cd3ea7de8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.197583 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/274230c4-41e5-433a-8878-a09cd3ea7de8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.199121 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/274230c4-41e5-433a-8878-a09cd3ea7de8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.202800 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjxvl\" (UniqueName: \"kubernetes.io/projected/274230c4-41e5-433a-8878-a09cd3ea7de8-kube-api-access-rjxvl\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.280452 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-aa3e6113-0e52-46b5-97dc-8d285a39562a\") pod \"openstack-galera-0\" (UID: \"274230c4-41e5-433a-8878-a09cd3ea7de8\") " pod="openstack/openstack-galera-0" Nov 22 08:26:41 crc kubenswrapper[4856]: I1122 08:26:41.576465 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.277718 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.281705 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.284970 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tpfz8" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.285030 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.285299 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.285465 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.299375 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.354504 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419128 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419199 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419225 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419253 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419280 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419347 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 
08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419368 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzbx\" (UniqueName: \"kubernetes.io/projected/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-kube-api-access-5wzbx\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.419449 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.461824 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b673936f-4f1b-42ea-b1da-12b855b8ee6d","Type":"ContainerStarted","Data":"3f16daa979b5436723eb6f30e33cea4faa0a102b28298bc4c3bdef2a9f4b0356"} Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.474744 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc30d930-50c7-4002-b44d-80f76828c9c1","Type":"ContainerStarted","Data":"a4117eccfabce705ccf87ea40af7a273faed3b30cc35e9cb86f68c462b19bf34"} Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.480584 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521301 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521358 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521424 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521466 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 
22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521485 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzbx\" (UniqueName: \"kubernetes.io/projected/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-kube-api-access-5wzbx\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521550 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.521582 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.522435 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.523331 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.523909 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.524325 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.527436 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.527490 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/19a2ab6fe52eee0250665d00d9c006cf4bad2a47cacdb61a86c0c1a30b2bf71b/globalmount\"" pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.529548 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.532484 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.543553 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzbx\" (UniqueName: \"kubernetes.io/projected/d4dcc1d5-4e57-45ff-931e-0be9bc3be546-kube-api-access-5wzbx\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.573822 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3d5c9a3f-ee96-444c-96ef-1aa8ce6e1864\") pod \"openstack-cell1-galera-0\" (UID: \"d4dcc1d5-4e57-45ff-931e-0be9bc3be546\") " pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.604679 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.858197 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.859729 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.862007 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.862275 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.862697 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zdlxj" Nov 22 08:26:42 crc kubenswrapper[4856]: I1122 08:26:42.881562 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.033612 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.035891 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbn6x\" (UniqueName: \"kubernetes.io/projected/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-kube-api-access-zbn6x\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.036104 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.036179 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-config-data\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.036214 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-kolla-config\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.137946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-config-data\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.137997 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-kolla-config\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.138032 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.138074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbn6x\" (UniqueName: \"kubernetes.io/projected/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-kube-api-access-zbn6x\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.138127 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.139150 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-config-data\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.139158 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-kolla-config\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.161094 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbn6x\" (UniqueName: \"kubernetes.io/projected/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-kube-api-access-zbn6x\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.164458 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.168239 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19ba91be-fe9b-4d3e-a85c-0f5236cfd60b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b\") " pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.216670 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.303230 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.492656 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"274230c4-41e5-433a-8878-a09cd3ea7de8","Type":"ContainerStarted","Data":"819d396fd8b202d6f45d694e7ce0d002bbe8bbf2096556015d8ab9d838b04f69"} Nov 22 08:26:43 crc kubenswrapper[4856]: I1122 08:26:43.728373 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 22 08:26:45 crc kubenswrapper[4856]: W1122 08:26:45.903521 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19ba91be_fe9b_4d3e_a85c_0f5236cfd60b.slice/crio-e8c8593dff2ee48cce18d2007b15ff547f7e16aaedbd8efc2ecb49acd39f9363 WatchSource:0}: Error finding container e8c8593dff2ee48cce18d2007b15ff547f7e16aaedbd8efc2ecb49acd39f9363: Status 404 returned error can't find the container with id e8c8593dff2ee48cce18d2007b15ff547f7e16aaedbd8efc2ecb49acd39f9363 Nov 22 08:26:45 crc kubenswrapper[4856]: W1122 08:26:45.905946 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4dcc1d5_4e57_45ff_931e_0be9bc3be546.slice/crio-13a5728f21838bf941f598e926b8f05d4226b2d03192ebfa0c62b26a0b41d8fa WatchSource:0}: Error finding container 13a5728f21838bf941f598e926b8f05d4226b2d03192ebfa0c62b26a0b41d8fa: Status 404 returned error can't find the container with id 13a5728f21838bf941f598e926b8f05d4226b2d03192ebfa0c62b26a0b41d8fa Nov 22 08:26:46 crc kubenswrapper[4856]: I1122 08:26:46.522125 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b","Type":"ContainerStarted","Data":"e8c8593dff2ee48cce18d2007b15ff547f7e16aaedbd8efc2ecb49acd39f9363"} Nov 22 08:26:46 crc kubenswrapper[4856]: I1122 08:26:46.523510 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4dcc1d5-4e57-45ff-931e-0be9bc3be546","Type":"ContainerStarted","Data":"13a5728f21838bf941f598e926b8f05d4226b2d03192ebfa0c62b26a0b41d8fa"} Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.618602 4856 generic.go:334] "Generic (PLEG): container finished" podID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerID="9cb235aadb0a5717e7dd203243c33e07bdbf04b6a559672633c423ba6168f8b0" exitCode=0 Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.618712 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" event={"ID":"0996884b-451f-4d66-85bf-680b8be0d7ee","Type":"ContainerDied","Data":"9cb235aadb0a5717e7dd203243c33e07bdbf04b6a559672633c423ba6168f8b0"} Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.621261 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"274230c4-41e5-433a-8878-a09cd3ea7de8","Type":"ContainerStarted","Data":"2b439508f4fa334a3e2a97aed53419a8a5f557ebae5c047275d7ea908d42701f"} Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.622784 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"19ba91be-fe9b-4d3e-a85c-0f5236cfd60b","Type":"ContainerStarted","Data":"5fc12ae8b79612326bab5f25d1df8fdd09883b3d8115ff37fe275937a6d62096"} Nov 22 08:26:58 crc 
kubenswrapper[4856]: I1122 08:26:58.622962 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.624060 4856 generic.go:334] "Generic (PLEG): container finished" podID="d403750d-73ae-4025-b7e1-c83c315a5985" containerID="b509ce4c2fe22bb4afbf98580c3d62c9788c26716a2e2c986093104842fabd4b" exitCode=0 Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.624129 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" event={"ID":"d403750d-73ae-4025-b7e1-c83c315a5985","Type":"ContainerDied","Data":"b509ce4c2fe22bb4afbf98580c3d62c9788c26716a2e2c986093104842fabd4b"} Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.627662 4856 generic.go:334] "Generic (PLEG): container finished" podID="c58c1559-e306-4c9c-b909-0713c8a84710" containerID="8a4a47c681bc1360db508845426fb838bda56e639e19648f327f8597e2bef273" exitCode=0 Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.627728 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" event={"ID":"c58c1559-e306-4c9c-b909-0713c8a84710","Type":"ContainerDied","Data":"8a4a47c681bc1360db508845426fb838bda56e639e19648f327f8597e2bef273"} Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.629578 4856 generic.go:334] "Generic (PLEG): container finished" podID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerID="2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0" exitCode=0 Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.629634 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" event={"ID":"8c37e2f9-b495-43c5-9738-85293ff06e7d","Type":"ContainerDied","Data":"2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0"} Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.631942 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4dcc1d5-4e57-45ff-931e-0be9bc3be546","Type":"ContainerStarted","Data":"e43ab9efe45ae12fab6828886527eb32ef7d551cab9b5ca96b8f2276fa54d6bb"} Nov 22 08:26:58 crc kubenswrapper[4856]: I1122 08:26:58.776176 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.927345458 podStartE2EDuration="16.776147605s" podCreationTimestamp="2025-11-22 08:26:42 +0000 UTC" firstStartedPulling="2025-11-22 08:26:45.911420306 +0000 UTC m=+5048.324813564" lastFinishedPulling="2025-11-22 08:26:57.760222453 +0000 UTC m=+5060.173615711" observedRunningTime="2025-11-22 08:26:58.765814487 +0000 UTC m=+5061.179207765" watchObservedRunningTime="2025-11-22 08:26:58.776147605 +0000 UTC m=+5061.189540863" Nov 22 08:26:58 crc kubenswrapper[4856]: E1122 08:26:58.985645 4856 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Nov 22 08:26:58 crc kubenswrapper[4856]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/8c37e2f9-b495-43c5-9738-85293ff06e7d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 22 08:26:58 crc kubenswrapper[4856]: > podSandboxID="17913761f792381d03464ff0c97e6fd069ccf985fe22add433a5a765074b6140" Nov 22 08:26:58 crc kubenswrapper[4856]: E1122 08:26:58.985826 4856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 22 08:26:58 crc kubenswrapper[4856]: container 
&Container{Name:dnsmasq-dns,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:87d86758a49b8425a546c66207f21761,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqnw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-59688db5f9-pt45n_openstack(8c37e2f9-b495-43c5-9738-85293ff06e7d): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/8c37e2f9-b495-43c5-9738-85293ff06e7d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Nov 22 08:26:58 crc kubenswrapper[4856]: > logger="UnhandledError" Nov 22 08:26:58 crc kubenswrapper[4856]: E1122 08:26:58.986955 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/8c37e2f9-b495-43c5-9738-85293ff06e7d/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.091099 4856 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.097547 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.221912 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpxs6\" (UniqueName: \"kubernetes.io/projected/c58c1559-e306-4c9c-b909-0713c8a84710-kube-api-access-vpxs6\") pod \"c58c1559-e306-4c9c-b909-0713c8a84710\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.221997 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn2g5\" (UniqueName: \"kubernetes.io/projected/d403750d-73ae-4025-b7e1-c83c315a5985-kube-api-access-jn2g5\") pod \"d403750d-73ae-4025-b7e1-c83c315a5985\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.222136 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-dns-svc\") pod \"c58c1559-e306-4c9c-b909-0713c8a84710\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.222270 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-config\") pod \"c58c1559-e306-4c9c-b909-0713c8a84710\" (UID: \"c58c1559-e306-4c9c-b909-0713c8a84710\") " Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.222308 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d403750d-73ae-4025-b7e1-c83c315a5985-config\") pod \"d403750d-73ae-4025-b7e1-c83c315a5985\" (UID: \"d403750d-73ae-4025-b7e1-c83c315a5985\") " Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.227399 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58c1559-e306-4c9c-b909-0713c8a84710-kube-api-access-vpxs6" (OuterVolumeSpecName: "kube-api-access-vpxs6") pod "c58c1559-e306-4c9c-b909-0713c8a84710" (UID: "c58c1559-e306-4c9c-b909-0713c8a84710"). InnerVolumeSpecName "kube-api-access-vpxs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.227788 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d403750d-73ae-4025-b7e1-c83c315a5985-kube-api-access-jn2g5" (OuterVolumeSpecName: "kube-api-access-jn2g5") pod "d403750d-73ae-4025-b7e1-c83c315a5985" (UID: "d403750d-73ae-4025-b7e1-c83c315a5985"). InnerVolumeSpecName "kube-api-access-jn2g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.247042 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-config" (OuterVolumeSpecName: "config") pod "c58c1559-e306-4c9c-b909-0713c8a84710" (UID: "c58c1559-e306-4c9c-b909-0713c8a84710"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.249267 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d403750d-73ae-4025-b7e1-c83c315a5985-config" (OuterVolumeSpecName: "config") pod "d403750d-73ae-4025-b7e1-c83c315a5985" (UID: "d403750d-73ae-4025-b7e1-c83c315a5985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.249997 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c58c1559-e306-4c9c-b909-0713c8a84710" (UID: "c58c1559-e306-4c9c-b909-0713c8a84710"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.324533 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpxs6\" (UniqueName: \"kubernetes.io/projected/c58c1559-e306-4c9c-b909-0713c8a84710-kube-api-access-vpxs6\") on node \"crc\" DevicePath \"\"" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.324566 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn2g5\" (UniqueName: \"kubernetes.io/projected/d403750d-73ae-4025-b7e1-c83c315a5985-kube-api-access-jn2g5\") on node \"crc\" DevicePath \"\"" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.324578 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.324587 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c1559-e306-4c9c-b909-0713c8a84710-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.324594 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d403750d-73ae-4025-b7e1-c83c315a5985-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.640079 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" event={"ID":"0996884b-451f-4d66-85bf-680b8be0d7ee","Type":"ContainerStarted","Data":"da3e1855e2927fddd4bd779f5fc9175ce5538f3b5e3a121d6bcd90895f6abd4c"} Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.640239 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.642098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc30d930-50c7-4002-b44d-80f76828c9c1","Type":"ContainerStarted","Data":"9c200b94b2d3848ec5b3d78e5c0d48024bb092ae1bff2f09f1faf0c32f8466f0"} Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.643913 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" event={"ID":"d403750d-73ae-4025-b7e1-c83c315a5985","Type":"ContainerDied","Data":"537cf0467b17bf980dc6cc84f631d8361bdfa576a9d05a995a076ea2b27700b0"} Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.643988 4856 scope.go:117] "RemoveContainer" containerID="b509ce4c2fe22bb4afbf98580c3d62c9788c26716a2e2c986093104842fabd4b" Nov 22 08:26:59 crc kubenswrapper[4856]: 
I1122 08:26:59.644038 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bbc85cdbf-k8s8c" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.645555 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.645901 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c4878bb99-8b66s" event={"ID":"c58c1559-e306-4c9c-b909-0713c8a84710","Type":"ContainerDied","Data":"c26df6940039928cb97e23dae1795a5b7b0fd5e6400fb7d6be43aa5ad3680a31"} Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.649744 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b673936f-4f1b-42ea-b1da-12b855b8ee6d","Type":"ContainerStarted","Data":"52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2"} Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.667631 4856 scope.go:117] "RemoveContainer" containerID="8a4a47c681bc1360db508845426fb838bda56e639e19648f327f8597e2bef273" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.698374 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" podStartSLOduration=3.940711249 podStartE2EDuration="21.698341886s" podCreationTimestamp="2025-11-22 08:26:38 +0000 UTC" firstStartedPulling="2025-11-22 08:26:39.973417601 +0000 UTC m=+5042.386810859" lastFinishedPulling="2025-11-22 08:26:57.731048238 +0000 UTC m=+5060.144441496" observedRunningTime="2025-11-22 08:26:59.69477946 +0000 UTC m=+5062.108172718" watchObservedRunningTime="2025-11-22 08:26:59.698341886 +0000 UTC m=+5062.111735144" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.749725 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-k8s8c"] Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.754957 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.755002 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.755042 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.755861 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9ce3e3934fdbe90bae0874d6336f40d827becf7fe198989484e4604fe47fb112"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.755911 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://9ce3e3934fdbe90bae0874d6336f40d827becf7fe198989484e4604fe47fb112" gracePeriod=600 Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.756982 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bbc85cdbf-k8s8c"] Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.829664 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-8b66s"] Nov 22 08:26:59 crc kubenswrapper[4856]: I1122 08:26:59.837538 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c4878bb99-8b66s"] Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.665279 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="9ce3e3934fdbe90bae0874d6336f40d827becf7fe198989484e4604fe47fb112" exitCode=0 Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.665466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"9ce3e3934fdbe90bae0874d6336f40d827becf7fe198989484e4604fe47fb112"} Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.666037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650"} Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.666058 4856 scope.go:117] "RemoveContainer" containerID="27a27eeea4e159ea16c06e2e47603798b3950737de9218b2de805db982b8556e" Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.671611 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" event={"ID":"8c37e2f9-b495-43c5-9738-85293ff06e7d","Type":"ContainerStarted","Data":"87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661"} Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.672014 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.699777 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" podStartSLOduration=4.610033999 podStartE2EDuration="22.699745967s" podCreationTimestamp="2025-11-22 08:26:38 +0000 UTC" firstStartedPulling="2025-11-22 08:26:39.674998546 +0000 UTC m=+5042.088391804" lastFinishedPulling="2025-11-22 08:26:57.764710514 +0000 UTC m=+5060.178103772" observedRunningTime="2025-11-22 08:27:00.699239644 +0000 UTC m=+5063.112632922" watchObservedRunningTime="2025-11-22 08:27:00.699745967 +0000 UTC m=+5063.113139225" Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.721706 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58c1559-e306-4c9c-b909-0713c8a84710" path="/var/lib/kubelet/pods/c58c1559-e306-4c9c-b909-0713c8a84710/volumes" Nov 22 08:27:00 crc kubenswrapper[4856]: I1122 08:27:00.722216 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d403750d-73ae-4025-b7e1-c83c315a5985" path="/var/lib/kubelet/pods/d403750d-73ae-4025-b7e1-c83c315a5985/volumes" Nov 22 08:27:01 crc kubenswrapper[4856]: I1122 08:27:01.681186 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="274230c4-41e5-433a-8878-a09cd3ea7de8" containerID="2b439508f4fa334a3e2a97aed53419a8a5f557ebae5c047275d7ea908d42701f" exitCode=0 Nov 22 08:27:01 crc kubenswrapper[4856]: I1122 08:27:01.681296 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"274230c4-41e5-433a-8878-a09cd3ea7de8","Type":"ContainerDied","Data":"2b439508f4fa334a3e2a97aed53419a8a5f557ebae5c047275d7ea908d42701f"} Nov 22 08:27:01 crc kubenswrapper[4856]: I1122 08:27:01.691922 4856 generic.go:334] "Generic (PLEG): container finished" podID="d4dcc1d5-4e57-45ff-931e-0be9bc3be546" containerID="e43ab9efe45ae12fab6828886527eb32ef7d551cab9b5ca96b8f2276fa54d6bb" exitCode=0 Nov 22 08:27:01 crc kubenswrapper[4856]: I1122 08:27:01.692013 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4dcc1d5-4e57-45ff-931e-0be9bc3be546","Type":"ContainerDied","Data":"e43ab9efe45ae12fab6828886527eb32ef7d551cab9b5ca96b8f2276fa54d6bb"} Nov 22 08:27:02 crc kubenswrapper[4856]: I1122 08:27:02.706376 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4dcc1d5-4e57-45ff-931e-0be9bc3be546","Type":"ContainerStarted","Data":"1b0e1e3f5d28bbe58a1c589936d2743b8452e391d4955daf2e428d4deadbbae7"} Nov 22 08:27:02 crc kubenswrapper[4856]: I1122 08:27:02.722649 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"274230c4-41e5-433a-8878-a09cd3ea7de8","Type":"ContainerStarted","Data":"c222a833c28b8d07bbcc39e31c5392cfa62ff29106674bb86f692f3d4cf51dac"} Nov 22 08:27:02 crc kubenswrapper[4856]: I1122 08:27:02.737936 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.816077741 podStartE2EDuration="21.737913171s" podCreationTimestamp="2025-11-22 08:26:41 +0000 UTC" firstStartedPulling="2025-11-22 08:26:45.911040686 +0000 UTC m=+5048.324433944" lastFinishedPulling="2025-11-22 08:26:57.832876116 +0000 UTC m=+5060.246269374" observedRunningTime="2025-11-22 08:27:02.733720189 +0000 UTC m=+5065.147113447" watchObservedRunningTime="2025-11-22 08:27:02.737913171 +0000 UTC m=+5065.151306429" Nov 22 08:27:02 crc kubenswrapper[4856]: I1122 08:27:02.754357 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.491435748 podStartE2EDuration="23.754337342s" podCreationTimestamp="2025-11-22 08:26:39 +0000 UTC" firstStartedPulling="2025-11-22 08:26:42.501903782 +0000 UTC m=+5044.915297040" lastFinishedPulling="2025-11-22 08:26:57.764805366 +0000 UTC m=+5060.178198634" observedRunningTime="2025-11-22 08:27:02.749984615 +0000 UTC m=+5065.163377883" watchObservedRunningTime="2025-11-22 08:27:02.754337342 +0000 UTC m=+5065.167730600" Nov 22 08:27:03 crc kubenswrapper[4856]: I1122 08:27:03.218485 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 22 08:27:04 crc kubenswrapper[4856]: I1122 08:27:04.030753 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:27:04 crc kubenswrapper[4856]: I1122 08:27:04.345064 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:27:04 crc kubenswrapper[4856]: I1122 08:27:04.392330 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59688db5f9-pt45n"] Nov 
22 08:27:04 crc kubenswrapper[4856]: I1122 08:27:04.724004 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerName="dnsmasq-dns" containerID="cri-o://87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661" gracePeriod=10 Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.144460 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.236668 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqnw2\" (UniqueName: \"kubernetes.io/projected/8c37e2f9-b495-43c5-9738-85293ff06e7d-kube-api-access-jqnw2\") pod \"8c37e2f9-b495-43c5-9738-85293ff06e7d\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.236761 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-config\") pod \"8c37e2f9-b495-43c5-9738-85293ff06e7d\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.236887 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-dns-svc\") pod \"8c37e2f9-b495-43c5-9738-85293ff06e7d\" (UID: \"8c37e2f9-b495-43c5-9738-85293ff06e7d\") " Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.242380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c37e2f9-b495-43c5-9738-85293ff06e7d-kube-api-access-jqnw2" (OuterVolumeSpecName: "kube-api-access-jqnw2") pod "8c37e2f9-b495-43c5-9738-85293ff06e7d" (UID: "8c37e2f9-b495-43c5-9738-85293ff06e7d"). InnerVolumeSpecName "kube-api-access-jqnw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.274337 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c37e2f9-b495-43c5-9738-85293ff06e7d" (UID: "8c37e2f9-b495-43c5-9738-85293ff06e7d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.276429 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-config" (OuterVolumeSpecName: "config") pod "8c37e2f9-b495-43c5-9738-85293ff06e7d" (UID: "8c37e2f9-b495-43c5-9738-85293ff06e7d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.339173 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.339213 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c37e2f9-b495-43c5-9738-85293ff06e7d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.339230 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqnw2\" (UniqueName: \"kubernetes.io/projected/8c37e2f9-b495-43c5-9738-85293ff06e7d-kube-api-access-jqnw2\") on node \"crc\" DevicePath \"\"" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.734268 4856 generic.go:334] "Generic (PLEG): container finished" podID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerID="87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661" exitCode=0 Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.734320 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" event={"ID":"8c37e2f9-b495-43c5-9738-85293ff06e7d","Type":"ContainerDied","Data":"87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661"} Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.734337 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.734376 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59688db5f9-pt45n" event={"ID":"8c37e2f9-b495-43c5-9738-85293ff06e7d","Type":"ContainerDied","Data":"17913761f792381d03464ff0c97e6fd069ccf985fe22add433a5a765074b6140"} Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.734412 4856 scope.go:117] "RemoveContainer" containerID="87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.760891 4856 scope.go:117] "RemoveContainer" containerID="2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.766111 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59688db5f9-pt45n"] Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.771783 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59688db5f9-pt45n"] Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.791021 4856 scope.go:117] "RemoveContainer" containerID="87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661" Nov 22 08:27:05 crc kubenswrapper[4856]: E1122 08:27:05.791682 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661\": container with ID starting with 87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661 not found: ID does not exist" containerID="87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.791726 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661"} err="failed to get container status 
\"87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661\": rpc error: code = NotFound desc = could not find container \"87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661\": container with ID starting with 87bb5c7ed676cede8b3f46c5ebc480e2e8a377420826351d3ec610227d426661 not found: ID does not exist" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.791754 4856 scope.go:117] "RemoveContainer" containerID="2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0" Nov 22 08:27:05 crc kubenswrapper[4856]: E1122 08:27:05.792072 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0\": container with ID starting with 2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0 not found: ID does not exist" containerID="2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0" Nov 22 08:27:05 crc kubenswrapper[4856]: I1122 08:27:05.792104 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0"} err="failed to get container status \"2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0\": rpc error: code = NotFound desc = could not find container \"2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0\": container with ID starting with 2e1014b58f51de0e535e08537cacf32cb06fbc6e747f6c0e4788c747d6c97eb0 not found: ID does not exist" Nov 22 08:27:06 crc kubenswrapper[4856]: I1122 08:27:06.722209 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" path="/var/lib/kubelet/pods/8c37e2f9-b495-43c5-9738-85293ff06e7d/volumes" Nov 22 08:27:11 crc kubenswrapper[4856]: I1122 08:27:11.577673 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 22 08:27:11 crc kubenswrapper[4856]: I1122 08:27:11.578190 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 22 08:27:11 crc kubenswrapper[4856]: I1122 08:27:11.645255 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 22 08:27:11 crc kubenswrapper[4856]: I1122 08:27:11.837196 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 22 08:27:12 crc kubenswrapper[4856]: I1122 08:27:12.605281 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 22 08:27:12 crc kubenswrapper[4856]: I1122 08:27:12.605693 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 22 08:27:12 crc kubenswrapper[4856]: I1122 08:27:12.676481 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 22 08:27:12 crc kubenswrapper[4856]: I1122 08:27:12.846414 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 22 08:27:31 crc kubenswrapper[4856]: I1122 08:27:31.927408 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerID="9c200b94b2d3848ec5b3d78e5c0d48024bb092ae1bff2f09f1faf0c32f8466f0" exitCode=0 Nov 22 08:27:31 crc kubenswrapper[4856]: I1122 08:27:31.927530 4856 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc30d930-50c7-4002-b44d-80f76828c9c1","Type":"ContainerDied","Data":"9c200b94b2d3848ec5b3d78e5c0d48024bb092ae1bff2f09f1faf0c32f8466f0"} Nov 22 08:27:31 crc kubenswrapper[4856]: I1122 08:27:31.931245 4856 generic.go:334] "Generic (PLEG): container finished" podID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerID="52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2" exitCode=0 Nov 22 08:27:31 crc kubenswrapper[4856]: I1122 08:27:31.931296 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b673936f-4f1b-42ea-b1da-12b855b8ee6d","Type":"ContainerDied","Data":"52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2"} Nov 22 08:27:32 crc kubenswrapper[4856]: I1122 08:27:32.939644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc30d930-50c7-4002-b44d-80f76828c9c1","Type":"ContainerStarted","Data":"d1436087cb2f191c64008b598b04da2c1a30e99756102d7f3442975765740cc1"} Nov 22 08:27:32 crc kubenswrapper[4856]: I1122 08:27:32.940119 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:27:32 crc kubenswrapper[4856]: I1122 08:27:32.941873 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b673936f-4f1b-42ea-b1da-12b855b8ee6d","Type":"ContainerStarted","Data":"5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987"} Nov 22 08:27:32 crc kubenswrapper[4856]: I1122 08:27:32.942052 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 08:27:32 crc kubenswrapper[4856]: I1122 08:27:32.966622 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.625211943 podStartE2EDuration="53.966599248s" podCreationTimestamp="2025-11-22 08:26:39 +0000 UTC" firstStartedPulling="2025-11-22 08:26:42.377467675 +0000 UTC m=+5044.790860933" lastFinishedPulling="2025-11-22 08:26:57.71885498 +0000 UTC m=+5060.132248238" observedRunningTime="2025-11-22 08:27:32.962969071 +0000 UTC m=+5095.376362359" watchObservedRunningTime="2025-11-22 08:27:32.966599248 +0000 UTC m=+5095.379992506" Nov 22 08:27:32 crc kubenswrapper[4856]: I1122 08:27:32.995024 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.490720003 podStartE2EDuration="54.995002122s" podCreationTimestamp="2025-11-22 08:26:38 +0000 UTC" firstStartedPulling="2025-11-22 08:26:41.971390344 +0000 UTC m=+5044.384783602" lastFinishedPulling="2025-11-22 08:26:55.475672463 +0000 UTC m=+5057.889065721" observedRunningTime="2025-11-22 08:27:32.989115103 +0000 UTC m=+5095.402508381" watchObservedRunningTime="2025-11-22 08:27:32.995002122 +0000 UTC m=+5095.408395380" Nov 22 08:27:50 crc kubenswrapper[4856]: I1122 08:27:50.193094 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 08:27:50 crc kubenswrapper[4856]: I1122 08:27:50.470763 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.545686 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-jhbcc"] Nov 22 08:27:56 crc kubenswrapper[4856]: E1122 08:27:56.546497 4856 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.546533 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: E1122 08:27:56.546551 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerName="dnsmasq-dns" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.546557 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerName="dnsmasq-dns" Nov 22 08:27:56 crc kubenswrapper[4856]: E1122 08:27:56.546570 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58c1559-e306-4c9c-b909-0713c8a84710" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.546575 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58c1559-e306-4c9c-b909-0713c8a84710" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: E1122 08:27:56.546584 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d403750d-73ae-4025-b7e1-c83c315a5985" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.546591 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d403750d-73ae-4025-b7e1-c83c315a5985" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.546760 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d403750d-73ae-4025-b7e1-c83c315a5985" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.546773 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58c1559-e306-4c9c-b909-0713c8a84710" containerName="init" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.546782 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c37e2f9-b495-43c5-9738-85293ff06e7d" containerName="dnsmasq-dns" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.547668 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.556999 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-jhbcc"] Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.688765 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkzlp\" (UniqueName: \"kubernetes.io/projected/57d240db-0f55-4656-97a6-3c1059b7eb76-kube-api-access-lkzlp\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.689002 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-config\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.689055 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-dns-svc\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.791010 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-config\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.791092 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-dns-svc\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.791113 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkzlp\" (UniqueName: \"kubernetes.io/projected/57d240db-0f55-4656-97a6-3c1059b7eb76-kube-api-access-lkzlp\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.792122 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-dns-svc\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.792204 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-config\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.810263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkzlp\" (UniqueName: 
\"kubernetes.io/projected/57d240db-0f55-4656-97a6-3c1059b7eb76-kube-api-access-lkzlp\") pod \"dnsmasq-dns-5bf8f59b77-jhbcc\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:56 crc kubenswrapper[4856]: I1122 08:27:56.870107 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:27:57 crc kubenswrapper[4856]: I1122 08:27:57.264296 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:27:57 crc kubenswrapper[4856]: I1122 08:27:57.294385 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-jhbcc"] Nov 22 08:27:57 crc kubenswrapper[4856]: I1122 08:27:57.953075 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:27:58 crc kubenswrapper[4856]: I1122 08:27:58.125266 4856 generic.go:334] "Generic (PLEG): container finished" podID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerID="49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42" exitCode=0 Nov 22 08:27:58 crc kubenswrapper[4856]: I1122 08:27:58.125339 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" event={"ID":"57d240db-0f55-4656-97a6-3c1059b7eb76","Type":"ContainerDied","Data":"49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42"} Nov 22 08:27:58 crc kubenswrapper[4856]: I1122 08:27:58.125372 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" event={"ID":"57d240db-0f55-4656-97a6-3c1059b7eb76","Type":"ContainerStarted","Data":"2e0d4e29c7cc73cfd24255f19ad9266ea32a3c8cd3c86649dda86045bb2307cb"} Nov 22 08:27:59 crc kubenswrapper[4856]: I1122 08:27:59.133895 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" event={"ID":"57d240db-0f55-4656-97a6-3c1059b7eb76","Type":"ContainerStarted","Data":"ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9"} Nov 22 08:27:59 crc kubenswrapper[4856]: I1122 08:27:59.134227 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:28:01 crc kubenswrapper[4856]: I1122 08:28:01.484702 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerName="rabbitmq" containerID="cri-o://5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987" gracePeriod=604796 Nov 22 08:28:01 crc kubenswrapper[4856]: I1122 08:28:01.990949 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerName="rabbitmq" containerID="cri-o://d1436087cb2f191c64008b598b04da2c1a30e99756102d7f3442975765740cc1" gracePeriod=604796 Nov 22 08:28:06 crc kubenswrapper[4856]: I1122 08:28:06.871741 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:28:06 crc kubenswrapper[4856]: I1122 08:28:06.890246 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" podStartSLOduration=10.890227985 podStartE2EDuration="10.890227985s" podCreationTimestamp="2025-11-22 08:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-22 08:27:59.160763564 +0000 UTC m=+5121.574156822" watchObservedRunningTime="2025-11-22 08:28:06.890227985 +0000 UTC m=+5129.303621243" Nov 22 08:28:06 crc kubenswrapper[4856]: I1122 08:28:06.917292 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-n8854"] Nov 22 08:28:06 crc kubenswrapper[4856]: I1122 08:28:06.917615 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" podUID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerName="dnsmasq-dns" containerID="cri-o://da3e1855e2927fddd4bd779f5fc9175ce5538f3b5e3a121d6bcd90895f6abd4c" gracePeriod=10 Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.215250 4856 generic.go:334] "Generic (PLEG): container finished" podID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerID="da3e1855e2927fddd4bd779f5fc9175ce5538f3b5e3a121d6bcd90895f6abd4c" exitCode=0 Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.215552 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" event={"ID":"0996884b-451f-4d66-85bf-680b8be0d7ee","Type":"ContainerDied","Data":"da3e1855e2927fddd4bd779f5fc9175ce5538f3b5e3a121d6bcd90895f6abd4c"} Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.341920 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.455765 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-dns-svc\") pod \"0996884b-451f-4d66-85bf-680b8be0d7ee\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.456031 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l5bc\" (UniqueName: \"kubernetes.io/projected/0996884b-451f-4d66-85bf-680b8be0d7ee-kube-api-access-8l5bc\") pod \"0996884b-451f-4d66-85bf-680b8be0d7ee\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.456107 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-config\") pod \"0996884b-451f-4d66-85bf-680b8be0d7ee\" (UID: \"0996884b-451f-4d66-85bf-680b8be0d7ee\") " Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.461170 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0996884b-451f-4d66-85bf-680b8be0d7ee-kube-api-access-8l5bc" (OuterVolumeSpecName: "kube-api-access-8l5bc") pod "0996884b-451f-4d66-85bf-680b8be0d7ee" (UID: "0996884b-451f-4d66-85bf-680b8be0d7ee"). InnerVolumeSpecName "kube-api-access-8l5bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.494264 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-config" (OuterVolumeSpecName: "config") pod "0996884b-451f-4d66-85bf-680b8be0d7ee" (UID: "0996884b-451f-4d66-85bf-680b8be0d7ee"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.495724 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0996884b-451f-4d66-85bf-680b8be0d7ee" (UID: "0996884b-451f-4d66-85bf-680b8be0d7ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.557848 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.557893 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l5bc\" (UniqueName: \"kubernetes.io/projected/0996884b-451f-4d66-85bf-680b8be0d7ee-kube-api-access-8l5bc\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.557905 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0996884b-451f-4d66-85bf-680b8be0d7ee-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:07 crc kubenswrapper[4856]: I1122 08:28:07.937573 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066251 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-plugins\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b673936f-4f1b-42ea-b1da-12b855b8ee6d-pod-info\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066612 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-server-conf\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066627 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-plugins-conf\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066669 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-tls\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066732 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b673936f-4f1b-42ea-b1da-12b855b8ee6d-erlang-cookie-secret\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: 
\"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066723 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066776 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-confd\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.066926 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.067007 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-config-data\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.067038 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-erlang-cookie\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.067071 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2vgf\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-kube-api-access-m2vgf\") pod \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\" (UID: \"b673936f-4f1b-42ea-b1da-12b855b8ee6d\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.067179 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.067397 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.067409 4856 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.068727 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.071048 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b673936f-4f1b-42ea-b1da-12b855b8ee6d-pod-info" (OuterVolumeSpecName: "pod-info") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.071136 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-kube-api-access-m2vgf" (OuterVolumeSpecName: "kube-api-access-m2vgf") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "kube-api-access-m2vgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.072680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.072915 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b673936f-4f1b-42ea-b1da-12b855b8ee6d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.088146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd" (OuterVolumeSpecName: "persistence") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.103433 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-config-data" (OuterVolumeSpecName: "config-data") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.115080 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-server-conf" (OuterVolumeSpecName: "server-conf") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.164288 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b673936f-4f1b-42ea-b1da-12b855b8ee6d" (UID: "b673936f-4f1b-42ea-b1da-12b855b8ee6d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.169880 4856 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b673936f-4f1b-42ea-b1da-12b855b8ee6d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.169947 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.170019 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") on node \"crc\" " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.170037 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.170052 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.170064 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2vgf\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-kube-api-access-m2vgf\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.170077 4856 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b673936f-4f1b-42ea-b1da-12b855b8ee6d-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.170089 4856 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b673936f-4f1b-42ea-b1da-12b855b8ee6d-server-conf\") on node \"crc\" 
DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.170101 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b673936f-4f1b-42ea-b1da-12b855b8ee6d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.191706 4856 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.191852 4856 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd") on node "crc" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.239741 4856 generic.go:334] "Generic (PLEG): container finished" podID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerID="5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987" exitCode=0 Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.239841 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b673936f-4f1b-42ea-b1da-12b855b8ee6d","Type":"ContainerDied","Data":"5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987"} Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.239881 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b673936f-4f1b-42ea-b1da-12b855b8ee6d","Type":"ContainerDied","Data":"3f16daa979b5436723eb6f30e33cea4faa0a102b28298bc4c3bdef2a9f4b0356"} Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.239905 4856 scope.go:117] "RemoveContainer" containerID="5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.239955 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.245255 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" event={"ID":"0996884b-451f-4d66-85bf-680b8be0d7ee","Type":"ContainerDied","Data":"e1d6faa0e2ec94216acf79baf969007655bc881c622b5d3ba00b46b05c410118"} Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.245412 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-574cff9d7f-n8854" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.246917 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerID="d1436087cb2f191c64008b598b04da2c1a30e99756102d7f3442975765740cc1" exitCode=0 Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.246964 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc30d930-50c7-4002-b44d-80f76828c9c1","Type":"ContainerDied","Data":"d1436087cb2f191c64008b598b04da2c1a30e99756102d7f3442975765740cc1"} Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.271969 4856 reconciler_common.go:293] "Volume detached for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.304594 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.313214 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.322777 4856 scope.go:117] "RemoveContainer" containerID="52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.329062 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-n8854"] Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.347336 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-574cff9d7f-n8854"] Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.358288 4856 scope.go:117] "RemoveContainer" containerID="5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.358442 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:28:08 crc kubenswrapper[4856]: E1122 08:28:08.358827 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerName="dnsmasq-dns" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.358846 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerName="dnsmasq-dns" Nov 22 08:28:08 crc kubenswrapper[4856]: E1122 08:28:08.358866 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerName="rabbitmq" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.358873 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerName="rabbitmq" Nov 22 08:28:08 crc kubenswrapper[4856]: E1122 08:28:08.358896 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerName="setup-container" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.358901 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerName="setup-container" Nov 22 08:28:08 crc kubenswrapper[4856]: E1122 08:28:08.358911 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerName="init" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.358916 4856 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerName="init" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.359059 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0996884b-451f-4d66-85bf-680b8be0d7ee" containerName="dnsmasq-dns" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.359067 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" containerName="rabbitmq" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.360123 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: E1122 08:28:08.362339 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987\": container with ID starting with 5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987 not found: ID does not exist" containerID="5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.362371 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987"} err="failed to get container status \"5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987\": rpc error: code = NotFound desc = could not find container \"5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987\": container with ID starting with 5d8651226e70d275fcf2341780ad29c527f4e71a685eacf9c979e0b62e514987 not found: ID does not exist" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.362405 4856 scope.go:117] "RemoveContainer" containerID="52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.363774 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.363961 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.364097 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hvgpt" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.364242 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.364342 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.364382 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.364443 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 22 08:28:08 crc kubenswrapper[4856]: E1122 08:28:08.364460 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2\": container with ID starting with 52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2 not found: ID does not exist" containerID="52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2" Nov 22 08:28:08 crc 
kubenswrapper[4856]: I1122 08:28:08.364483 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2"} err="failed to get container status \"52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2\": rpc error: code = NotFound desc = could not find container \"52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2\": container with ID starting with 52c3d4341f06e158cb2c37e92246079ac97b8c23ca9d341028c636e797dfdef2 not found: ID does not exist" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.364501 4856 scope.go:117] "RemoveContainer" containerID="da3e1855e2927fddd4bd779f5fc9175ce5538f3b5e3a121d6bcd90895f6abd4c" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.364768 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.394744 4856 scope.go:117] "RemoveContainer" containerID="9cb235aadb0a5717e7dd203243c33e07bdbf04b6a559672633c423ba6168f8b0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.474800 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.474864 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-config-data\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.474892 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.474913 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.475060 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1cbd452-2d8c-428f-98a7-325984950be2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.475097 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.475141 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.475196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb5tf\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-kube-api-access-lb5tf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.475291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.475331 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.475430 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1cbd452-2d8c-428f-98a7-325984950be2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.530705 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.576765 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.576848 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-config-data\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.576889 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.576921 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.576952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1cbd452-2d8c-428f-98a7-325984950be2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.576980 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.577020 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.577060 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb5tf\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-kube-api-access-lb5tf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.577102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.577133 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.577157 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1cbd452-2d8c-428f-98a7-325984950be2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.577458 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.578841 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-config-data\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.579098 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.579549 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.579762 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f1cbd452-2d8c-428f-98a7-325984950be2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.579961 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.579984 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/675b50c71cf4d76e24255932f233e1308e49f3fbbec5594a63f6595cbe644c77/globalmount\"" pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.585407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f1cbd452-2d8c-428f-98a7-325984950be2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.585591 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.585743 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f1cbd452-2d8c-428f-98a7-325984950be2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.585645 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.599839 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb5tf\" (UniqueName: \"kubernetes.io/projected/f1cbd452-2d8c-428f-98a7-325984950be2-kube-api-access-lb5tf\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.629472 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b89a5107-6462-4b47-a184-fa89dba5a1dd\") pod \"rabbitmq-server-0\" (UID: \"f1cbd452-2d8c-428f-98a7-325984950be2\") " pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.678107 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-config-data\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.678197 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-erlang-cookie\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 
08:28:08.678287 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-server-conf\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.678317 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-plugins-conf\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.678862 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.678903 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8xm4\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-kube-api-access-q8xm4\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.678949 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-plugins\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.679017 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc30d930-50c7-4002-b44d-80f76828c9c1-pod-info\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.679017 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.679313 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.679363 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-confd\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.679381 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.679452 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-tls\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.679488 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc30d930-50c7-4002-b44d-80f76828c9c1-erlang-cookie-secret\") pod \"cc30d930-50c7-4002-b44d-80f76828c9c1\" (UID: \"cc30d930-50c7-4002-b44d-80f76828c9c1\") " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.680028 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.680059 4856 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.680073 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.683019 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-kube-api-access-q8xm4" (OuterVolumeSpecName: "kube-api-access-q8xm4") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "kube-api-access-q8xm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.683198 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cc30d930-50c7-4002-b44d-80f76828c9c1-pod-info" (OuterVolumeSpecName: "pod-info") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.684298 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc30d930-50c7-4002-b44d-80f76828c9c1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.686041 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.693781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d" (OuterVolumeSpecName: "persistence") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "pvc-526ca893-675e-4752-952d-e9936927c34d". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.694012 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.709255 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-config-data" (OuterVolumeSpecName: "config-data") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.724607 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0996884b-451f-4d66-85bf-680b8be0d7ee" path="/var/lib/kubelet/pods/0996884b-451f-4d66-85bf-680b8be0d7ee/volumes" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.725815 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b673936f-4f1b-42ea-b1da-12b855b8ee6d" path="/var/lib/kubelet/pods/b673936f-4f1b-42ea-b1da-12b855b8ee6d/volumes" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.732024 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-server-conf" (OuterVolumeSpecName: "server-conf") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.776139 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cc30d930-50c7-4002-b44d-80f76828c9c1" (UID: "cc30d930-50c7-4002-b44d-80f76828c9c1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781125 4856 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-server-conf\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781149 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8xm4\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-kube-api-access-q8xm4\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781161 4856 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cc30d930-50c7-4002-b44d-80f76828c9c1-pod-info\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781197 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") on node \"crc\" " Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781210 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781220 4856 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cc30d930-50c7-4002-b44d-80f76828c9c1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781229 4856 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cc30d930-50c7-4002-b44d-80f76828c9c1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.781240 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cc30d930-50c7-4002-b44d-80f76828c9c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.800881 4856 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.801073 4856 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-526ca893-675e-4752-952d-e9936927c34d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d") on node "crc" Nov 22 08:28:08 crc kubenswrapper[4856]: I1122 08:28:08.882782 4856 reconciler_common.go:293] "Volume detached for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") on node \"crc\" DevicePath \"\"" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.118940 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.256075 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cc30d930-50c7-4002-b44d-80f76828c9c1","Type":"ContainerDied","Data":"a4117eccfabce705ccf87ea40af7a273faed3b30cc35e9cb86f68c462b19bf34"} Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.256389 4856 scope.go:117] "RemoveContainer" containerID="d1436087cb2f191c64008b598b04da2c1a30e99756102d7f3442975765740cc1" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.256102 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.257594 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1cbd452-2d8c-428f-98a7-325984950be2","Type":"ContainerStarted","Data":"637188b78539b629e20c8f6f3d31a6ca4eff0f5264327b24c220b79cb612b5e4"} Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.278050 4856 scope.go:117] "RemoveContainer" containerID="9c200b94b2d3848ec5b3d78e5c0d48024bb092ae1bff2f09f1faf0c32f8466f0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.294285 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.297432 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.316392 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:28:09 crc kubenswrapper[4856]: E1122 08:28:09.316858 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerName="rabbitmq" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.316880 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerName="rabbitmq" Nov 22 08:28:09 crc kubenswrapper[4856]: E1122 08:28:09.316894 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerName="setup-container" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.316903 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerName="setup-container" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.317078 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc30d930-50c7-4002-b44d-80f76828c9c1" containerName="rabbitmq" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.318435 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.322155 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.322191 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.322337 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.322596 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hd4vk" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.322741 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.322773 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.323957 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.330380 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.391959 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ea8e244-352d-4f27-86b8-2036996316e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392059 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392084 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ea8e244-352d-4f27-86b8-2036996316e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392110 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392164 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 
08:28:09.392198 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392222 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392259 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392287 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392310 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.392333 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4nk7\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-kube-api-access-w4nk7\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493619 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493677 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ea8e244-352d-4f27-86b8-2036996316e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: 
I1122 08:28:09.493805 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493846 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493865 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493935 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493956 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.493981 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4nk7\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-kube-api-access-w4nk7\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.494047 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ea8e244-352d-4f27-86b8-2036996316e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.495013 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.495029 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.495689 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.496740 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.496905 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ea8e244-352d-4f27-86b8-2036996316e2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.497293 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ea8e244-352d-4f27-86b8-2036996316e2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.498364 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ea8e244-352d-4f27-86b8-2036996316e2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.499004 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.499027 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.499841 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.499874 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/de4312679e98056a9e2352058a52abe0161146fbcf6616a28fa4f9c792b7b37c/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.514403 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4nk7\" (UniqueName: \"kubernetes.io/projected/7ea8e244-352d-4f27-86b8-2036996316e2-kube-api-access-w4nk7\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.530377 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-526ca893-675e-4752-952d-e9936927c34d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-526ca893-675e-4752-952d-e9936927c34d\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ea8e244-352d-4f27-86b8-2036996316e2\") " pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:09 crc kubenswrapper[4856]: I1122 08:28:09.658665 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:10 crc kubenswrapper[4856]: I1122 08:28:10.056249 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 22 08:28:10 crc kubenswrapper[4856]: W1122 08:28:10.061147 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ea8e244_352d_4f27_86b8_2036996316e2.slice/crio-9055bc6f0cbd0b9adf97c8274c6d299abb14296439071b24c5761bc2c8bcc754 WatchSource:0}: Error finding container 9055bc6f0cbd0b9adf97c8274c6d299abb14296439071b24c5761bc2c8bcc754: Status 404 returned error can't find the container with id 9055bc6f0cbd0b9adf97c8274c6d299abb14296439071b24c5761bc2c8bcc754 Nov 22 08:28:10 crc kubenswrapper[4856]: I1122 08:28:10.270454 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1cbd452-2d8c-428f-98a7-325984950be2","Type":"ContainerStarted","Data":"9d57496dd718dcb28051169b2fca823f5a90716d351b9c8eb0905ca507d8154d"} Nov 22 08:28:10 crc kubenswrapper[4856]: I1122 08:28:10.271669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ea8e244-352d-4f27-86b8-2036996316e2","Type":"ContainerStarted","Data":"9055bc6f0cbd0b9adf97c8274c6d299abb14296439071b24c5761bc2c8bcc754"} Nov 22 08:28:10 crc kubenswrapper[4856]: I1122 08:28:10.730294 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc30d930-50c7-4002-b44d-80f76828c9c1" path="/var/lib/kubelet/pods/cc30d930-50c7-4002-b44d-80f76828c9c1/volumes" Nov 22 08:28:11 crc kubenswrapper[4856]: I1122 08:28:11.284364 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ea8e244-352d-4f27-86b8-2036996316e2","Type":"ContainerStarted","Data":"8c00391de8d0ab6115676ee272149a7d7c982fd415d52ecfd7a0756b466d2cae"} Nov 22 08:28:42 crc kubenswrapper[4856]: I1122 08:28:42.516328 4856 generic.go:334] "Generic (PLEG): container 
finished" podID="f1cbd452-2d8c-428f-98a7-325984950be2" containerID="9d57496dd718dcb28051169b2fca823f5a90716d351b9c8eb0905ca507d8154d" exitCode=0 Nov 22 08:28:42 crc kubenswrapper[4856]: I1122 08:28:42.516427 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1cbd452-2d8c-428f-98a7-325984950be2","Type":"ContainerDied","Data":"9d57496dd718dcb28051169b2fca823f5a90716d351b9c8eb0905ca507d8154d"} Nov 22 08:28:43 crc kubenswrapper[4856]: I1122 08:28:43.527852 4856 generic.go:334] "Generic (PLEG): container finished" podID="7ea8e244-352d-4f27-86b8-2036996316e2" containerID="8c00391de8d0ab6115676ee272149a7d7c982fd415d52ecfd7a0756b466d2cae" exitCode=0 Nov 22 08:28:43 crc kubenswrapper[4856]: I1122 08:28:43.527936 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ea8e244-352d-4f27-86b8-2036996316e2","Type":"ContainerDied","Data":"8c00391de8d0ab6115676ee272149a7d7c982fd415d52ecfd7a0756b466d2cae"} Nov 22 08:28:43 crc kubenswrapper[4856]: I1122 08:28:43.531212 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f1cbd452-2d8c-428f-98a7-325984950be2","Type":"ContainerStarted","Data":"d75d50bd31f6040d9dfb955158fa704a472113f8d889ad59697901846d4c387c"} Nov 22 08:28:43 crc kubenswrapper[4856]: I1122 08:28:43.531493 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 22 08:28:43 crc kubenswrapper[4856]: I1122 08:28:43.603312 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=35.603284678 podStartE2EDuration="35.603284678s" podCreationTimestamp="2025-11-22 08:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:28:43.602326382 +0000 UTC m=+5166.015719640" watchObservedRunningTime="2025-11-22 08:28:43.603284678 +0000 UTC m=+5166.016677936" Nov 22 08:28:44 crc kubenswrapper[4856]: I1122 08:28:44.548020 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ea8e244-352d-4f27-86b8-2036996316e2","Type":"ContainerStarted","Data":"378286700e0b681515670a765fe06e304d3b5ad3cd10b4f882cd870f27d962f1"} Nov 22 08:28:44 crc kubenswrapper[4856]: I1122 08:28:44.548903 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:28:44 crc kubenswrapper[4856]: I1122 08:28:44.573200 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=35.573181112 podStartE2EDuration="35.573181112s" podCreationTimestamp="2025-11-22 08:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:28:44.569607346 +0000 UTC m=+5166.983000624" watchObservedRunningTime="2025-11-22 08:28:44.573181112 +0000 UTC m=+5166.986574370" Nov 22 08:28:58 crc kubenswrapper[4856]: I1122 08:28:58.696773 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 22 08:28:59 crc kubenswrapper[4856]: I1122 08:28:59.661741 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.582246 4856 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/mariadb-client-1-default"] Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.583676 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.586461 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvzxv" Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.592905 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.679085 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77xxq\" (UniqueName: \"kubernetes.io/projected/d5dd309b-2457-4914-ac17-3d34fd69f518-kube-api-access-77xxq\") pod \"mariadb-client-1-default\" (UID: \"d5dd309b-2457-4914-ac17-3d34fd69f518\") " pod="openstack/mariadb-client-1-default" Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.781035 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77xxq\" (UniqueName: \"kubernetes.io/projected/d5dd309b-2457-4914-ac17-3d34fd69f518-kube-api-access-77xxq\") pod \"mariadb-client-1-default\" (UID: \"d5dd309b-2457-4914-ac17-3d34fd69f518\") " pod="openstack/mariadb-client-1-default" Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.799888 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77xxq\" (UniqueName: \"kubernetes.io/projected/d5dd309b-2457-4914-ac17-3d34fd69f518-kube-api-access-77xxq\") pod \"mariadb-client-1-default\" (UID: \"d5dd309b-2457-4914-ac17-3d34fd69f518\") " pod="openstack/mariadb-client-1-default" Nov 22 08:29:04 crc kubenswrapper[4856]: I1122 08:29:04.921887 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 08:29:05 crc kubenswrapper[4856]: I1122 08:29:05.396579 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 08:29:05 crc kubenswrapper[4856]: W1122 08:29:05.402665 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5dd309b_2457_4914_ac17_3d34fd69f518.slice/crio-cc57fd16fc82f7954438381ec7e80c3d4c44b3bafc5f0d3ef00e435e755709e7 WatchSource:0}: Error finding container cc57fd16fc82f7954438381ec7e80c3d4c44b3bafc5f0d3ef00e435e755709e7: Status 404 returned error can't find the container with id cc57fd16fc82f7954438381ec7e80c3d4c44b3bafc5f0d3ef00e435e755709e7 Nov 22 08:29:05 crc kubenswrapper[4856]: I1122 08:29:05.712330 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"d5dd309b-2457-4914-ac17-3d34fd69f518","Type":"ContainerStarted","Data":"cc57fd16fc82f7954438381ec7e80c3d4c44b3bafc5f0d3ef00e435e755709e7"} Nov 22 08:29:06 crc kubenswrapper[4856]: I1122 08:29:06.728739 4856 generic.go:334] "Generic (PLEG): container finished" podID="d5dd309b-2457-4914-ac17-3d34fd69f518" containerID="06f500865215ac4b33e61007a998e35c9ee1172ac21a52145582acc249d52854" exitCode=0 Nov 22 08:29:06 crc kubenswrapper[4856]: I1122 08:29:06.729129 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"d5dd309b-2457-4914-ac17-3d34fd69f518","Type":"ContainerDied","Data":"06f500865215ac4b33e61007a998e35c9ee1172ac21a52145582acc249d52854"} Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.098154 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.127478 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_d5dd309b-2457-4914-ac17-3d34fd69f518/mariadb-client-1-default/0.log" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.137183 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77xxq\" (UniqueName: \"kubernetes.io/projected/d5dd309b-2457-4914-ac17-3d34fd69f518-kube-api-access-77xxq\") pod \"d5dd309b-2457-4914-ac17-3d34fd69f518\" (UID: \"d5dd309b-2457-4914-ac17-3d34fd69f518\") " Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.144554 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5dd309b-2457-4914-ac17-3d34fd69f518-kube-api-access-77xxq" (OuterVolumeSpecName: "kube-api-access-77xxq") pod "d5dd309b-2457-4914-ac17-3d34fd69f518" (UID: "d5dd309b-2457-4914-ac17-3d34fd69f518"). InnerVolumeSpecName "kube-api-access-77xxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.157892 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.165043 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.239681 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77xxq\" (UniqueName: \"kubernetes.io/projected/d5dd309b-2457-4914-ac17-3d34fd69f518-kube-api-access-77xxq\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.607230 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 08:29:08 crc kubenswrapper[4856]: E1122 08:29:08.607557 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5dd309b-2457-4914-ac17-3d34fd69f518" containerName="mariadb-client-1-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.607576 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5dd309b-2457-4914-ac17-3d34fd69f518" containerName="mariadb-client-1-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.607781 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5dd309b-2457-4914-ac17-3d34fd69f518" containerName="mariadb-client-1-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.608327 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.623875 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.645830 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttx2f\" (UniqueName: \"kubernetes.io/projected/62f6109a-44db-4e70-991a-732196a6c845-kube-api-access-ttx2f\") pod \"mariadb-client-2-default\" (UID: \"62f6109a-44db-4e70-991a-732196a6c845\") " pod="openstack/mariadb-client-2-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.720575 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5dd309b-2457-4914-ac17-3d34fd69f518" path="/var/lib/kubelet/pods/d5dd309b-2457-4914-ac17-3d34fd69f518/volumes" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.745572 4856 scope.go:117] "RemoveContainer" containerID="06f500865215ac4b33e61007a998e35c9ee1172ac21a52145582acc249d52854" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.745632 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.749408 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttx2f\" (UniqueName: \"kubernetes.io/projected/62f6109a-44db-4e70-991a-732196a6c845-kube-api-access-ttx2f\") pod \"mariadb-client-2-default\" (UID: \"62f6109a-44db-4e70-991a-732196a6c845\") " pod="openstack/mariadb-client-2-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.771314 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttx2f\" (UniqueName: \"kubernetes.io/projected/62f6109a-44db-4e70-991a-732196a6c845-kube-api-access-ttx2f\") pod \"mariadb-client-2-default\" (UID: \"62f6109a-44db-4e70-991a-732196a6c845\") " pod="openstack/mariadb-client-2-default" Nov 22 08:29:08 crc kubenswrapper[4856]: I1122 08:29:08.927098 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 08:29:09 crc kubenswrapper[4856]: I1122 08:29:09.214564 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 08:29:09 crc kubenswrapper[4856]: I1122 08:29:09.755522 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"62f6109a-44db-4e70-991a-732196a6c845","Type":"ContainerStarted","Data":"d86dd1dfeea3162321f597edb3d5af8e5332ad8b41dc112e26a6645a0004ef60"} Nov 22 08:29:09 crc kubenswrapper[4856]: I1122 08:29:09.755574 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"62f6109a-44db-4e70-991a-732196a6c845","Type":"ContainerStarted","Data":"ca44e1627d8ca90c44c537284de3d21f773759eaf6137f28066863c4dcc1f26e"} Nov 22 08:29:09 crc kubenswrapper[4856]: I1122 08:29:09.772969 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.772945775 podStartE2EDuration="1.772945775s" podCreationTimestamp="2025-11-22 08:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:29:09.768157715 +0000 UTC m=+5192.181550973" watchObservedRunningTime="2025-11-22 08:29:09.772945775 +0000 UTC m=+5192.186339033" Nov 22 08:29:10 crc kubenswrapper[4856]: I1122 08:29:10.764962 4856 generic.go:334] "Generic (PLEG): container finished" podID="62f6109a-44db-4e70-991a-732196a6c845" containerID="d86dd1dfeea3162321f597edb3d5af8e5332ad8b41dc112e26a6645a0004ef60" exitCode=1 Nov 22 08:29:10 crc kubenswrapper[4856]: I1122 08:29:10.765033 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"62f6109a-44db-4e70-991a-732196a6c845","Type":"ContainerDied","Data":"d86dd1dfeea3162321f597edb3d5af8e5332ad8b41dc112e26a6645a0004ef60"} Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.084797 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.099010 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttx2f\" (UniqueName: \"kubernetes.io/projected/62f6109a-44db-4e70-991a-732196a6c845-kube-api-access-ttx2f\") pod \"62f6109a-44db-4e70-991a-732196a6c845\" (UID: \"62f6109a-44db-4e70-991a-732196a6c845\") " Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.105123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62f6109a-44db-4e70-991a-732196a6c845-kube-api-access-ttx2f" (OuterVolumeSpecName: "kube-api-access-ttx2f") pod "62f6109a-44db-4e70-991a-732196a6c845" (UID: "62f6109a-44db-4e70-991a-732196a6c845"). InnerVolumeSpecName "kube-api-access-ttx2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.116768 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.124293 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.200316 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttx2f\" (UniqueName: \"kubernetes.io/projected/62f6109a-44db-4e70-991a-732196a6c845-kube-api-access-ttx2f\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.593686 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Nov 22 08:29:12 crc kubenswrapper[4856]: E1122 08:29:12.594553 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f6109a-44db-4e70-991a-732196a6c845" containerName="mariadb-client-2-default" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.594573 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f6109a-44db-4e70-991a-732196a6c845" containerName="mariadb-client-2-default" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.594748 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f6109a-44db-4e70-991a-732196a6c845" containerName="mariadb-client-2-default" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.595362 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.600462 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.606852 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7jj\" (UniqueName: \"kubernetes.io/projected/bfe9acd0-501a-424f-bc93-e854871a9221-kube-api-access-gj7jj\") pod \"mariadb-client-1\" (UID: \"bfe9acd0-501a-424f-bc93-e854871a9221\") " pod="openstack/mariadb-client-1" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.707687 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj7jj\" (UniqueName: \"kubernetes.io/projected/bfe9acd0-501a-424f-bc93-e854871a9221-kube-api-access-gj7jj\") pod \"mariadb-client-1\" (UID: \"bfe9acd0-501a-424f-bc93-e854871a9221\") " pod="openstack/mariadb-client-1" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.718299 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f6109a-44db-4e70-991a-732196a6c845" path="/var/lib/kubelet/pods/62f6109a-44db-4e70-991a-732196a6c845/volumes" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.728960 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj7jj\" (UniqueName: \"kubernetes.io/projected/bfe9acd0-501a-424f-bc93-e854871a9221-kube-api-access-gj7jj\") pod \"mariadb-client-1\" (UID: \"bfe9acd0-501a-424f-bc93-e854871a9221\") " pod="openstack/mariadb-client-1" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.781536 4856 scope.go:117] "RemoveContainer" containerID="d86dd1dfeea3162321f597edb3d5af8e5332ad8b41dc112e26a6645a0004ef60" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.781698 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2-default" Nov 22 08:29:12 crc kubenswrapper[4856]: I1122 08:29:12.912948 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 08:29:13 crc kubenswrapper[4856]: I1122 08:29:13.381973 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 08:29:13 crc kubenswrapper[4856]: I1122 08:29:13.792569 4856 generic.go:334] "Generic (PLEG): container finished" podID="bfe9acd0-501a-424f-bc93-e854871a9221" containerID="15451d7f3bd2fed80a5408c89c6fd368893c6df453879873e0166e42e1108c07" exitCode=0 Nov 22 08:29:13 crc kubenswrapper[4856]: I1122 08:29:13.792803 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"bfe9acd0-501a-424f-bc93-e854871a9221","Type":"ContainerDied","Data":"15451d7f3bd2fed80a5408c89c6fd368893c6df453879873e0166e42e1108c07"} Nov 22 08:29:13 crc kubenswrapper[4856]: I1122 08:29:13.793011 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"bfe9acd0-501a-424f-bc93-e854871a9221","Type":"ContainerStarted","Data":"c75d83179b7da9164f8ae82e291a94d8fd92c249756cd008ae17e3d8209651f2"} Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.135305 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.154319 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_bfe9acd0-501a-424f-bc93-e854871a9221/mariadb-client-1/0.log" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.183460 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.190149 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.242750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj7jj\" (UniqueName: \"kubernetes.io/projected/bfe9acd0-501a-424f-bc93-e854871a9221-kube-api-access-gj7jj\") pod \"bfe9acd0-501a-424f-bc93-e854871a9221\" (UID: \"bfe9acd0-501a-424f-bc93-e854871a9221\") " Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.248746 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe9acd0-501a-424f-bc93-e854871a9221-kube-api-access-gj7jj" (OuterVolumeSpecName: "kube-api-access-gj7jj") pod "bfe9acd0-501a-424f-bc93-e854871a9221" (UID: "bfe9acd0-501a-424f-bc93-e854871a9221"). InnerVolumeSpecName "kube-api-access-gj7jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.344492 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj7jj\" (UniqueName: \"kubernetes.io/projected/bfe9acd0-501a-424f-bc93-e854871a9221-kube-api-access-gj7jj\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.675146 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 08:29:15 crc kubenswrapper[4856]: E1122 08:29:15.675579 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe9acd0-501a-424f-bc93-e854871a9221" containerName="mariadb-client-1" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.675596 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe9acd0-501a-424f-bc93-e854871a9221" containerName="mariadb-client-1" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.675771 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe9acd0-501a-424f-bc93-e854871a9221" containerName="mariadb-client-1" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.676349 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.682174 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.809357 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c75d83179b7da9164f8ae82e291a94d8fd92c249756cd008ae17e3d8209651f2" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.809443 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.852245 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj86n\" (UniqueName: \"kubernetes.io/projected/2b9206ed-bf03-4c42-b203-2184e58c6453-kube-api-access-gj86n\") pod \"mariadb-client-4-default\" (UID: \"2b9206ed-bf03-4c42-b203-2184e58c6453\") " pod="openstack/mariadb-client-4-default" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.953654 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj86n\" (UniqueName: \"kubernetes.io/projected/2b9206ed-bf03-4c42-b203-2184e58c6453-kube-api-access-gj86n\") pod \"mariadb-client-4-default\" (UID: \"2b9206ed-bf03-4c42-b203-2184e58c6453\") " pod="openstack/mariadb-client-4-default" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.971953 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj86n\" (UniqueName: \"kubernetes.io/projected/2b9206ed-bf03-4c42-b203-2184e58c6453-kube-api-access-gj86n\") pod \"mariadb-client-4-default\" (UID: \"2b9206ed-bf03-4c42-b203-2184e58c6453\") " pod="openstack/mariadb-client-4-default" Nov 22 08:29:15 crc kubenswrapper[4856]: I1122 08:29:15.991980 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 08:29:16 crc kubenswrapper[4856]: I1122 08:29:16.486834 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 08:29:16 crc kubenswrapper[4856]: I1122 08:29:16.721385 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe9acd0-501a-424f-bc93-e854871a9221" path="/var/lib/kubelet/pods/bfe9acd0-501a-424f-bc93-e854871a9221/volumes" Nov 22 08:29:16 crc kubenswrapper[4856]: I1122 08:29:16.817627 4856 generic.go:334] "Generic (PLEG): container finished" podID="2b9206ed-bf03-4c42-b203-2184e58c6453" containerID="ffed7500c9c061d086131c84cadd807715dc57eb504507081a625ff7ad20cea2" exitCode=0 Nov 22 08:29:16 crc kubenswrapper[4856]: I1122 08:29:16.817679 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"2b9206ed-bf03-4c42-b203-2184e58c6453","Type":"ContainerDied","Data":"ffed7500c9c061d086131c84cadd807715dc57eb504507081a625ff7ad20cea2"} Nov 22 08:29:16 crc kubenswrapper[4856]: I1122 08:29:16.817713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"2b9206ed-bf03-4c42-b203-2184e58c6453","Type":"ContainerStarted","Data":"7d81c745fdf98453d2d2b880cd7ecf496790e1d235a04a6e34f8ece0acda6502"} Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.169821 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.188734 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_2b9206ed-bf03-4c42-b203-2184e58c6453/mariadb-client-4-default/0.log" Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.219370 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.227726 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.287385 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj86n\" (UniqueName: \"kubernetes.io/projected/2b9206ed-bf03-4c42-b203-2184e58c6453-kube-api-access-gj86n\") pod \"2b9206ed-bf03-4c42-b203-2184e58c6453\" (UID: \"2b9206ed-bf03-4c42-b203-2184e58c6453\") " Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.292351 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9206ed-bf03-4c42-b203-2184e58c6453-kube-api-access-gj86n" (OuterVolumeSpecName: "kube-api-access-gj86n") pod "2b9206ed-bf03-4c42-b203-2184e58c6453" (UID: "2b9206ed-bf03-4c42-b203-2184e58c6453"). InnerVolumeSpecName "kube-api-access-gj86n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.389239 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj86n\" (UniqueName: \"kubernetes.io/projected/2b9206ed-bf03-4c42-b203-2184e58c6453-kube-api-access-gj86n\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.719611 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b9206ed-bf03-4c42-b203-2184e58c6453" path="/var/lib/kubelet/pods/2b9206ed-bf03-4c42-b203-2184e58c6453/volumes" Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.833284 4856 scope.go:117] "RemoveContainer" containerID="ffed7500c9c061d086131c84cadd807715dc57eb504507081a625ff7ad20cea2" Nov 22 08:29:18 crc kubenswrapper[4856]: I1122 08:29:18.833339 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.751225 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 08:29:21 crc kubenswrapper[4856]: E1122 08:29:21.751823 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9206ed-bf03-4c42-b203-2184e58c6453" containerName="mariadb-client-4-default" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.751836 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9206ed-bf03-4c42-b203-2184e58c6453" containerName="mariadb-client-4-default" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.752004 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9206ed-bf03-4c42-b203-2184e58c6453" containerName="mariadb-client-4-default" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.752523 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.754676 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvzxv" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.761315 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.841173 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjdgf\" (UniqueName: \"kubernetes.io/projected/69982008-bd3a-488c-826c-ebdd2d3ba93a-kube-api-access-cjdgf\") pod \"mariadb-client-5-default\" (UID: \"69982008-bd3a-488c-826c-ebdd2d3ba93a\") " pod="openstack/mariadb-client-5-default" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.943359 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjdgf\" (UniqueName: \"kubernetes.io/projected/69982008-bd3a-488c-826c-ebdd2d3ba93a-kube-api-access-cjdgf\") pod \"mariadb-client-5-default\" (UID: \"69982008-bd3a-488c-826c-ebdd2d3ba93a\") " pod="openstack/mariadb-client-5-default" Nov 22 08:29:21 crc kubenswrapper[4856]: I1122 08:29:21.963164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjdgf\" (UniqueName: \"kubernetes.io/projected/69982008-bd3a-488c-826c-ebdd2d3ba93a-kube-api-access-cjdgf\") pod \"mariadb-client-5-default\" (UID: \"69982008-bd3a-488c-826c-ebdd2d3ba93a\") " pod="openstack/mariadb-client-5-default" Nov 22 08:29:22 crc kubenswrapper[4856]: I1122 08:29:22.075170 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 08:29:22 crc kubenswrapper[4856]: I1122 08:29:22.379001 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 08:29:22 crc kubenswrapper[4856]: I1122 08:29:22.873012 4856 generic.go:334] "Generic (PLEG): container finished" podID="69982008-bd3a-488c-826c-ebdd2d3ba93a" containerID="c025d7e83181e5491b3cb475378bf13d58d685c3809e5960e983d64d5bbe6b28" exitCode=0 Nov 22 08:29:22 crc kubenswrapper[4856]: I1122 08:29:22.873063 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"69982008-bd3a-488c-826c-ebdd2d3ba93a","Type":"ContainerDied","Data":"c025d7e83181e5491b3cb475378bf13d58d685c3809e5960e983d64d5bbe6b28"} Nov 22 08:29:22 crc kubenswrapper[4856]: I1122 08:29:22.873097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"69982008-bd3a-488c-826c-ebdd2d3ba93a","Type":"ContainerStarted","Data":"e31a84ce8b600f136eaa8a87eafab8fa10d37d922d5f10092d9b3d096aa7cea4"} Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.222179 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.260845 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_69982008-bd3a-488c-826c-ebdd2d3ba93a/mariadb-client-5-default/0.log" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.280372 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjdgf\" (UniqueName: \"kubernetes.io/projected/69982008-bd3a-488c-826c-ebdd2d3ba93a-kube-api-access-cjdgf\") pod \"69982008-bd3a-488c-826c-ebdd2d3ba93a\" (UID: \"69982008-bd3a-488c-826c-ebdd2d3ba93a\") " Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.288910 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69982008-bd3a-488c-826c-ebdd2d3ba93a-kube-api-access-cjdgf" (OuterVolumeSpecName: "kube-api-access-cjdgf") pod "69982008-bd3a-488c-826c-ebdd2d3ba93a" (UID: "69982008-bd3a-488c-826c-ebdd2d3ba93a"). InnerVolumeSpecName "kube-api-access-cjdgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.290271 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.297740 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.382808 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjdgf\" (UniqueName: \"kubernetes.io/projected/69982008-bd3a-488c-826c-ebdd2d3ba93a-kube-api-access-cjdgf\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.430178 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 08:29:24 crc kubenswrapper[4856]: E1122 08:29:24.430596 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69982008-bd3a-488c-826c-ebdd2d3ba93a" containerName="mariadb-client-5-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.430618 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="69982008-bd3a-488c-826c-ebdd2d3ba93a" containerName="mariadb-client-5-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.430798 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="69982008-bd3a-488c-826c-ebdd2d3ba93a" containerName="mariadb-client-5-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.431438 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.437483 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.485635 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jqv\" (UniqueName: \"kubernetes.io/projected/2f23064a-600c-48f0-8eee-417998718af8-kube-api-access-m6jqv\") pod \"mariadb-client-6-default\" (UID: \"2f23064a-600c-48f0-8eee-417998718af8\") " pod="openstack/mariadb-client-6-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.587061 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6jqv\" (UniqueName: \"kubernetes.io/projected/2f23064a-600c-48f0-8eee-417998718af8-kube-api-access-m6jqv\") pod \"mariadb-client-6-default\" (UID: \"2f23064a-600c-48f0-8eee-417998718af8\") " pod="openstack/mariadb-client-6-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.606743 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6jqv\" (UniqueName: \"kubernetes.io/projected/2f23064a-600c-48f0-8eee-417998718af8-kube-api-access-m6jqv\") pod \"mariadb-client-6-default\" (UID: \"2f23064a-600c-48f0-8eee-417998718af8\") " pod="openstack/mariadb-client-6-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.719357 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69982008-bd3a-488c-826c-ebdd2d3ba93a" path="/var/lib/kubelet/pods/69982008-bd3a-488c-826c-ebdd2d3ba93a/volumes" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.750906 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.889532 4856 scope.go:117] "RemoveContainer" containerID="c025d7e83181e5491b3cb475378bf13d58d685c3809e5960e983d64d5bbe6b28" Nov 22 08:29:24 crc kubenswrapper[4856]: I1122 08:29:24.889658 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Nov 22 08:29:25 crc kubenswrapper[4856]: I1122 08:29:25.227632 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 08:29:25 crc kubenswrapper[4856]: W1122 08:29:25.235967 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f23064a_600c_48f0_8eee_417998718af8.slice/crio-e3e13b353b7f1601728d58c6b2f70acb9e50ebd7db448887963356049337d36e WatchSource:0}: Error finding container e3e13b353b7f1601728d58c6b2f70acb9e50ebd7db448887963356049337d36e: Status 404 returned error can't find the container with id e3e13b353b7f1601728d58c6b2f70acb9e50ebd7db448887963356049337d36e Nov 22 08:29:25 crc kubenswrapper[4856]: I1122 08:29:25.897924 4856 generic.go:334] "Generic (PLEG): container finished" podID="2f23064a-600c-48f0-8eee-417998718af8" containerID="dfe69eadfdcdf1efa12481e14c25e47522741266d2e28d1e7e37d662e8bf408a" exitCode=1 Nov 22 08:29:25 crc kubenswrapper[4856]: I1122 08:29:25.897994 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"2f23064a-600c-48f0-8eee-417998718af8","Type":"ContainerDied","Data":"dfe69eadfdcdf1efa12481e14c25e47522741266d2e28d1e7e37d662e8bf408a"} Nov 22 08:29:25 crc kubenswrapper[4856]: I1122 08:29:25.898298 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"2f23064a-600c-48f0-8eee-417998718af8","Type":"ContainerStarted","Data":"e3e13b353b7f1601728d58c6b2f70acb9e50ebd7db448887963356049337d36e"} Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.236218 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.254653 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_2f23064a-600c-48f0-8eee-417998718af8/mariadb-client-6-default/0.log" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.284446 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.289046 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.426957 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6jqv\" (UniqueName: \"kubernetes.io/projected/2f23064a-600c-48f0-8eee-417998718af8-kube-api-access-m6jqv\") pod \"2f23064a-600c-48f0-8eee-417998718af8\" (UID: \"2f23064a-600c-48f0-8eee-417998718af8\") " Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.433828 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f23064a-600c-48f0-8eee-417998718af8-kube-api-access-m6jqv" (OuterVolumeSpecName: "kube-api-access-m6jqv") pod "2f23064a-600c-48f0-8eee-417998718af8" (UID: "2f23064a-600c-48f0-8eee-417998718af8"). InnerVolumeSpecName "kube-api-access-m6jqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.444706 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 08:29:27 crc kubenswrapper[4856]: E1122 08:29:27.445112 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f23064a-600c-48f0-8eee-417998718af8" containerName="mariadb-client-6-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.445135 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f23064a-600c-48f0-8eee-417998718af8" containerName="mariadb-client-6-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.445284 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f23064a-600c-48f0-8eee-417998718af8" containerName="mariadb-client-6-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.445852 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.453829 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.529495 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6jqv\" (UniqueName: \"kubernetes.io/projected/2f23064a-600c-48f0-8eee-417998718af8-kube-api-access-m6jqv\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.631111 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv2tj\" (UniqueName: \"kubernetes.io/projected/cc517f03-e6c4-413c-9b25-1dbba59f56dc-kube-api-access-mv2tj\") pod \"mariadb-client-7-default\" (UID: \"cc517f03-e6c4-413c-9b25-1dbba59f56dc\") " pod="openstack/mariadb-client-7-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.733325 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv2tj\" (UniqueName: \"kubernetes.io/projected/cc517f03-e6c4-413c-9b25-1dbba59f56dc-kube-api-access-mv2tj\") pod \"mariadb-client-7-default\" (UID: \"cc517f03-e6c4-413c-9b25-1dbba59f56dc\") " pod="openstack/mariadb-client-7-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.757042 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv2tj\" (UniqueName: \"kubernetes.io/projected/cc517f03-e6c4-413c-9b25-1dbba59f56dc-kube-api-access-mv2tj\") pod \"mariadb-client-7-default\" (UID: \"cc517f03-e6c4-413c-9b25-1dbba59f56dc\") " pod="openstack/mariadb-client-7-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.780901 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.917804 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e13b353b7f1601728d58c6b2f70acb9e50ebd7db448887963356049337d36e" Nov 22 08:29:27 crc kubenswrapper[4856]: I1122 08:29:27.917943 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Nov 22 08:29:28 crc kubenswrapper[4856]: I1122 08:29:28.260762 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 08:29:28 crc kubenswrapper[4856]: W1122 08:29:28.263800 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc517f03_e6c4_413c_9b25_1dbba59f56dc.slice/crio-89f137a5d8f3ced4bb4a360cb40b313c0827a6bec7185615758c9b0eb9a0ee25 WatchSource:0}: Error finding container 89f137a5d8f3ced4bb4a360cb40b313c0827a6bec7185615758c9b0eb9a0ee25: Status 404 returned error can't find the container with id 89f137a5d8f3ced4bb4a360cb40b313c0827a6bec7185615758c9b0eb9a0ee25 Nov 22 08:29:28 crc kubenswrapper[4856]: I1122 08:29:28.718985 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f23064a-600c-48f0-8eee-417998718af8" path="/var/lib/kubelet/pods/2f23064a-600c-48f0-8eee-417998718af8/volumes" Nov 22 08:29:28 crc kubenswrapper[4856]: I1122 08:29:28.933040 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc517f03-e6c4-413c-9b25-1dbba59f56dc" containerID="19d64df02a03986fefbc8e48912a8defb386f4696643a4c7d905139b92e0df09" exitCode=0 Nov 22 08:29:28 crc kubenswrapper[4856]: I1122 08:29:28.933088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"cc517f03-e6c4-413c-9b25-1dbba59f56dc","Type":"ContainerDied","Data":"19d64df02a03986fefbc8e48912a8defb386f4696643a4c7d905139b92e0df09"} Nov 22 08:29:28 crc kubenswrapper[4856]: I1122 08:29:28.933119 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"cc517f03-e6c4-413c-9b25-1dbba59f56dc","Type":"ContainerStarted","Data":"89f137a5d8f3ced4bb4a360cb40b313c0827a6bec7185615758c9b0eb9a0ee25"} Nov 22 08:29:29 crc kubenswrapper[4856]: I1122 08:29:29.754606 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:29:29 crc kubenswrapper[4856]: I1122 08:29:29.755161 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.277806 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.296822 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_cc517f03-e6c4-413c-9b25-1dbba59f56dc/mariadb-client-7-default/0.log" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.321306 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.329303 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.474682 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv2tj\" (UniqueName: \"kubernetes.io/projected/cc517f03-e6c4-413c-9b25-1dbba59f56dc-kube-api-access-mv2tj\") pod \"cc517f03-e6c4-413c-9b25-1dbba59f56dc\" (UID: \"cc517f03-e6c4-413c-9b25-1dbba59f56dc\") " Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.482894 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc517f03-e6c4-413c-9b25-1dbba59f56dc-kube-api-access-mv2tj" (OuterVolumeSpecName: "kube-api-access-mv2tj") pod "cc517f03-e6c4-413c-9b25-1dbba59f56dc" (UID: "cc517f03-e6c4-413c-9b25-1dbba59f56dc"). InnerVolumeSpecName "kube-api-access-mv2tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.487678 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Nov 22 08:29:30 crc kubenswrapper[4856]: E1122 08:29:30.488085 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc517f03-e6c4-413c-9b25-1dbba59f56dc" containerName="mariadb-client-7-default" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.488111 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc517f03-e6c4-413c-9b25-1dbba59f56dc" containerName="mariadb-client-7-default" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.488319 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc517f03-e6c4-413c-9b25-1dbba59f56dc" containerName="mariadb-client-7-default" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.489072 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.495734 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.576146 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w95n\" (UniqueName: \"kubernetes.io/projected/843b6a31-ed41-4956-a49e-24514d09a44a-kube-api-access-9w95n\") pod \"mariadb-client-2\" (UID: \"843b6a31-ed41-4956-a49e-24514d09a44a\") " pod="openstack/mariadb-client-2" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.576288 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv2tj\" (UniqueName: \"kubernetes.io/projected/cc517f03-e6c4-413c-9b25-1dbba59f56dc-kube-api-access-mv2tj\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.677725 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w95n\" (UniqueName: \"kubernetes.io/projected/843b6a31-ed41-4956-a49e-24514d09a44a-kube-api-access-9w95n\") pod \"mariadb-client-2\" (UID: \"843b6a31-ed41-4956-a49e-24514d09a44a\") " pod="openstack/mariadb-client-2" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.694440 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w95n\" (UniqueName: \"kubernetes.io/projected/843b6a31-ed41-4956-a49e-24514d09a44a-kube-api-access-9w95n\") pod \"mariadb-client-2\" (UID: \"843b6a31-ed41-4956-a49e-24514d09a44a\") " pod="openstack/mariadb-client-2" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.721319 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc517f03-e6c4-413c-9b25-1dbba59f56dc" path="/var/lib/kubelet/pods/cc517f03-e6c4-413c-9b25-1dbba59f56dc/volumes" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.829094 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.951819 4856 scope.go:117] "RemoveContainer" containerID="19d64df02a03986fefbc8e48912a8defb386f4696643a4c7d905139b92e0df09" Nov 22 08:29:30 crc kubenswrapper[4856]: I1122 08:29:30.951990 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Nov 22 08:29:31 crc kubenswrapper[4856]: I1122 08:29:31.303277 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 08:29:31 crc kubenswrapper[4856]: W1122 08:29:31.312117 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod843b6a31_ed41_4956_a49e_24514d09a44a.slice/crio-195430cc8afa6d53d7a5d33d720c0f0c50fa248c6ca4ff7c129ebde5b7a1a818 WatchSource:0}: Error finding container 195430cc8afa6d53d7a5d33d720c0f0c50fa248c6ca4ff7c129ebde5b7a1a818: Status 404 returned error can't find the container with id 195430cc8afa6d53d7a5d33d720c0f0c50fa248c6ca4ff7c129ebde5b7a1a818 Nov 22 08:29:31 crc kubenswrapper[4856]: I1122 08:29:31.961262 4856 generic.go:334] "Generic (PLEG): container finished" podID="843b6a31-ed41-4956-a49e-24514d09a44a" containerID="2038a0c4aa339cc24469bcbf53f3f99e34d6cc99f46ee590600ba2617ee2e28e" exitCode=0 Nov 22 08:29:31 crc kubenswrapper[4856]: I1122 08:29:31.961343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"843b6a31-ed41-4956-a49e-24514d09a44a","Type":"ContainerDied","Data":"2038a0c4aa339cc24469bcbf53f3f99e34d6cc99f46ee590600ba2617ee2e28e"} Nov 22 08:29:31 crc kubenswrapper[4856]: I1122 08:29:31.961370 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"843b6a31-ed41-4956-a49e-24514d09a44a","Type":"ContainerStarted","Data":"195430cc8afa6d53d7a5d33d720c0f0c50fa248c6ca4ff7c129ebde5b7a1a818"} Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.299842 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.316562 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_843b6a31-ed41-4956-a49e-24514d09a44a/mariadb-client-2/0.log" Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.348961 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.353599 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.416363 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w95n\" (UniqueName: \"kubernetes.io/projected/843b6a31-ed41-4956-a49e-24514d09a44a-kube-api-access-9w95n\") pod \"843b6a31-ed41-4956-a49e-24514d09a44a\" (UID: \"843b6a31-ed41-4956-a49e-24514d09a44a\") " Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.424464 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843b6a31-ed41-4956-a49e-24514d09a44a-kube-api-access-9w95n" (OuterVolumeSpecName: "kube-api-access-9w95n") pod "843b6a31-ed41-4956-a49e-24514d09a44a" (UID: "843b6a31-ed41-4956-a49e-24514d09a44a"). InnerVolumeSpecName "kube-api-access-9w95n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.518726 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w95n\" (UniqueName: \"kubernetes.io/projected/843b6a31-ed41-4956-a49e-24514d09a44a-kube-api-access-9w95n\") on node \"crc\" DevicePath \"\"" Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.982443 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="195430cc8afa6d53d7a5d33d720c0f0c50fa248c6ca4ff7c129ebde5b7a1a818" Nov 22 08:29:33 crc kubenswrapper[4856]: I1122 08:29:33.982502 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Nov 22 08:29:34 crc kubenswrapper[4856]: I1122 08:29:34.719951 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843b6a31-ed41-4956-a49e-24514d09a44a" path="/var/lib/kubelet/pods/843b6a31-ed41-4956-a49e-24514d09a44a/volumes" Nov 22 08:29:59 crc kubenswrapper[4856]: I1122 08:29:59.754971 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:29:59 crc kubenswrapper[4856]: I1122 08:29:59.756092 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.144514 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t"] Nov 22 08:30:00 crc kubenswrapper[4856]: E1122 08:30:00.145725 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843b6a31-ed41-4956-a49e-24514d09a44a" containerName="mariadb-client-2" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.145772 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="843b6a31-ed41-4956-a49e-24514d09a44a" containerName="mariadb-client-2" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.146149 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="843b6a31-ed41-4956-a49e-24514d09a44a" containerName="mariadb-client-2" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.147034 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.150304 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.150502 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.154658 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t"] Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.318194 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pbsj\" (UniqueName: \"kubernetes.io/projected/9036fc97-e929-4add-b263-f40f8374bb33-kube-api-access-9pbsj\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.318267 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9036fc97-e929-4add-b263-f40f8374bb33-config-volume\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.318376 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9036fc97-e929-4add-b263-f40f8374bb33-secret-volume\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.419469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9036fc97-e929-4add-b263-f40f8374bb33-secret-volume\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.419588 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pbsj\" (UniqueName: \"kubernetes.io/projected/9036fc97-e929-4add-b263-f40f8374bb33-kube-api-access-9pbsj\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.419612 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9036fc97-e929-4add-b263-f40f8374bb33-config-volume\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.420729 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9036fc97-e929-4add-b263-f40f8374bb33-config-volume\") pod 
\"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.427150 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9036fc97-e929-4add-b263-f40f8374bb33-secret-volume\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.438412 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pbsj\" (UniqueName: \"kubernetes.io/projected/9036fc97-e929-4add-b263-f40f8374bb33-kube-api-access-9pbsj\") pod \"collect-profiles-29396670-wlh6t\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.469611 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:00 crc kubenswrapper[4856]: I1122 08:30:00.882170 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t"] Nov 22 08:30:01 crc kubenswrapper[4856]: I1122 08:30:01.183295 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" event={"ID":"9036fc97-e929-4add-b263-f40f8374bb33","Type":"ContainerStarted","Data":"cda8080a120af309508d245c33e163d8158e8ea617945b08f4b6c9a30ca8b5f6"} Nov 22 08:30:01 crc kubenswrapper[4856]: I1122 08:30:01.183345 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" event={"ID":"9036fc97-e929-4add-b263-f40f8374bb33","Type":"ContainerStarted","Data":"bd6c1b5fbe835442ebe56549fea2bb74c30c49ddac05330b954201aa4cf39354"} Nov 22 08:30:01 crc kubenswrapper[4856]: I1122 08:30:01.217245 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" podStartSLOduration=1.217223934 podStartE2EDuration="1.217223934s" podCreationTimestamp="2025-11-22 08:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:30:01.215116167 +0000 UTC m=+5243.628509435" watchObservedRunningTime="2025-11-22 08:30:01.217223934 +0000 UTC m=+5243.630617192" Nov 22 08:30:02 crc kubenswrapper[4856]: I1122 08:30:02.194372 4856 generic.go:334] "Generic (PLEG): container finished" podID="9036fc97-e929-4add-b263-f40f8374bb33" containerID="cda8080a120af309508d245c33e163d8158e8ea617945b08f4b6c9a30ca8b5f6" exitCode=0 Nov 22 08:30:02 crc kubenswrapper[4856]: I1122 08:30:02.194479 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" event={"ID":"9036fc97-e929-4add-b263-f40f8374bb33","Type":"ContainerDied","Data":"cda8080a120af309508d245c33e163d8158e8ea617945b08f4b6c9a30ca8b5f6"} Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.467558 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.573352 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9036fc97-e929-4add-b263-f40f8374bb33-secret-volume\") pod \"9036fc97-e929-4add-b263-f40f8374bb33\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.573528 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9036fc97-e929-4add-b263-f40f8374bb33-config-volume\") pod \"9036fc97-e929-4add-b263-f40f8374bb33\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.573605 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pbsj\" (UniqueName: \"kubernetes.io/projected/9036fc97-e929-4add-b263-f40f8374bb33-kube-api-access-9pbsj\") pod \"9036fc97-e929-4add-b263-f40f8374bb33\" (UID: \"9036fc97-e929-4add-b263-f40f8374bb33\") " Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.574213 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9036fc97-e929-4add-b263-f40f8374bb33-config-volume" (OuterVolumeSpecName: "config-volume") pod "9036fc97-e929-4add-b263-f40f8374bb33" (UID: "9036fc97-e929-4add-b263-f40f8374bb33"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.578438 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9036fc97-e929-4add-b263-f40f8374bb33-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9036fc97-e929-4add-b263-f40f8374bb33" (UID: "9036fc97-e929-4add-b263-f40f8374bb33"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.579499 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9036fc97-e929-4add-b263-f40f8374bb33-kube-api-access-9pbsj" (OuterVolumeSpecName: "kube-api-access-9pbsj") pod "9036fc97-e929-4add-b263-f40f8374bb33" (UID: "9036fc97-e929-4add-b263-f40f8374bb33"). InnerVolumeSpecName "kube-api-access-9pbsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.676185 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9036fc97-e929-4add-b263-f40f8374bb33-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.676259 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9036fc97-e929-4add-b263-f40f8374bb33-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:30:03 crc kubenswrapper[4856]: I1122 08:30:03.676274 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pbsj\" (UniqueName: \"kubernetes.io/projected/9036fc97-e929-4add-b263-f40f8374bb33-kube-api-access-9pbsj\") on node \"crc\" DevicePath \"\"" Nov 22 08:30:04 crc kubenswrapper[4856]: I1122 08:30:04.214754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" event={"ID":"9036fc97-e929-4add-b263-f40f8374bb33","Type":"ContainerDied","Data":"bd6c1b5fbe835442ebe56549fea2bb74c30c49ddac05330b954201aa4cf39354"} Nov 22 08:30:04 crc kubenswrapper[4856]: I1122 08:30:04.214799 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t" Nov 22 08:30:04 crc kubenswrapper[4856]: I1122 08:30:04.214817 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6c1b5fbe835442ebe56549fea2bb74c30c49ddac05330b954201aa4cf39354" Nov 22 08:30:04 crc kubenswrapper[4856]: I1122 08:30:04.535493 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg"] Nov 22 08:30:04 crc kubenswrapper[4856]: I1122 08:30:04.542010 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396625-hwmlg"] Nov 22 08:30:04 crc kubenswrapper[4856]: I1122 08:30:04.719911 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f152037-3ab0-425a-9bec-a1f0c06dc808" path="/var/lib/kubelet/pods/9f152037-3ab0-425a-9bec-a1f0c06dc808/volumes" Nov 22 08:30:21 crc kubenswrapper[4856]: I1122 08:30:21.898138 4856 scope.go:117] "RemoveContainer" containerID="5cb3cc7f64c76e128baf9671310b5089aa84855b0cff1d241021f610973c772f" Nov 22 08:30:21 crc kubenswrapper[4856]: I1122 08:30:21.919492 4856 scope.go:117] "RemoveContainer" containerID="0715e32e4800a2c98cdde51b7576d1d29174b7256ce9850762932d568e3491db" Nov 22 08:30:29 crc kubenswrapper[4856]: I1122 08:30:29.754471 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:30:29 crc kubenswrapper[4856]: I1122 08:30:29.754844 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:30:29 crc kubenswrapper[4856]: I1122 08:30:29.754887 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:30:29 crc kubenswrapper[4856]: I1122 08:30:29.755625 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:30:29 crc kubenswrapper[4856]: I1122 08:30:29.755678 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" gracePeriod=600 Nov 22 08:30:29 crc kubenswrapper[4856]: E1122 08:30:29.879388 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:30:30 crc kubenswrapper[4856]: I1122 08:30:30.411026 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" exitCode=0 Nov 22 08:30:30 crc kubenswrapper[4856]: I1122 08:30:30.411080 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650"} Nov 22 08:30:30 crc kubenswrapper[4856]: I1122 08:30:30.411145 4856 scope.go:117] "RemoveContainer" containerID="9ce3e3934fdbe90bae0874d6336f40d827becf7fe198989484e4604fe47fb112" Nov 22 08:30:30 crc kubenswrapper[4856]: I1122 08:30:30.411838 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:30:30 crc kubenswrapper[4856]: E1122 08:30:30.412222 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:30:42 crc kubenswrapper[4856]: I1122 08:30:42.709737 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:30:42 crc kubenswrapper[4856]: E1122 08:30:42.710919 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:30:55 crc kubenswrapper[4856]: I1122 
08:30:55.710643 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:30:55 crc kubenswrapper[4856]: E1122 08:30:55.711563 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.766455 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8zhh"] Nov 22 08:31:04 crc kubenswrapper[4856]: E1122 08:31:04.767449 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9036fc97-e929-4add-b263-f40f8374bb33" containerName="collect-profiles" Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.767529 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9036fc97-e929-4add-b263-f40f8374bb33" containerName="collect-profiles" Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.767738 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9036fc97-e929-4add-b263-f40f8374bb33" containerName="collect-profiles" Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.769043 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.781500 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8zhh"] Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.902019 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-utilities\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.902217 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh9c4\" (UniqueName: \"kubernetes.io/projected/30d7b79e-2d45-424f-a9fd-477cebc40298-kube-api-access-bh9c4\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:04 crc kubenswrapper[4856]: I1122 08:31:04.902296 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-catalog-content\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.004005 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh9c4\" (UniqueName: \"kubernetes.io/projected/30d7b79e-2d45-424f-a9fd-477cebc40298-kube-api-access-bh9c4\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.004086 4856 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-catalog-content\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.004136 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-utilities\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.004676 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-utilities\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.004760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-catalog-content\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.026433 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh9c4\" (UniqueName: \"kubernetes.io/projected/30d7b79e-2d45-424f-a9fd-477cebc40298-kube-api-access-bh9c4\") pod \"community-operators-m8zhh\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.090392 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.557168 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8zhh"] Nov 22 08:31:05 crc kubenswrapper[4856]: I1122 08:31:05.649757 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8zhh" event={"ID":"30d7b79e-2d45-424f-a9fd-477cebc40298","Type":"ContainerStarted","Data":"47bf1618494008c45f61758d15c70dbdffcae24685241206c4902f6cb182568a"} Nov 22 08:31:06 crc kubenswrapper[4856]: I1122 08:31:06.658463 4856 generic.go:334] "Generic (PLEG): container finished" podID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerID="7d2b3cc2dc1eefa3214289e070a63f247b95dbc1e4dea539c656533fa73b7d2f" exitCode=0 Nov 22 08:31:06 crc kubenswrapper[4856]: I1122 08:31:06.658556 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8zhh" event={"ID":"30d7b79e-2d45-424f-a9fd-477cebc40298","Type":"ContainerDied","Data":"7d2b3cc2dc1eefa3214289e070a63f247b95dbc1e4dea539c656533fa73b7d2f"} Nov 22 08:31:07 crc kubenswrapper[4856]: I1122 08:31:07.671789 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8zhh" event={"ID":"30d7b79e-2d45-424f-a9fd-477cebc40298","Type":"ContainerStarted","Data":"f7d1dde0bf4691b59a12fc80afb0031aff82c145d4416a7290e3a53e41f853ad"} Nov 22 08:31:08 crc kubenswrapper[4856]: I1122 08:31:08.682435 4856 generic.go:334] "Generic (PLEG): container finished" podID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerID="f7d1dde0bf4691b59a12fc80afb0031aff82c145d4416a7290e3a53e41f853ad" exitCode=0 Nov 22 08:31:08 crc kubenswrapper[4856]: I1122 08:31:08.682544 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8zhh" event={"ID":"30d7b79e-2d45-424f-a9fd-477cebc40298","Type":"ContainerDied","Data":"f7d1dde0bf4691b59a12fc80afb0031aff82c145d4416a7290e3a53e41f853ad"} Nov 22 08:31:08 crc kubenswrapper[4856]: I1122 08:31:08.715119 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:31:08 crc kubenswrapper[4856]: E1122 08:31:08.715459 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:31:09 crc kubenswrapper[4856]: I1122 08:31:09.692485 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8zhh" event={"ID":"30d7b79e-2d45-424f-a9fd-477cebc40298","Type":"ContainerStarted","Data":"b87e517ef9745f76584bd324723f87ae3fa0efd59c92043300fc4ec71bb70f8b"} Nov 22 08:31:09 crc kubenswrapper[4856]: I1122 08:31:09.709991 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8zhh" podStartSLOduration=3.281825489 podStartE2EDuration="5.70996729s" podCreationTimestamp="2025-11-22 08:31:04 +0000 UTC" firstStartedPulling="2025-11-22 08:31:06.660072928 +0000 UTC m=+5309.073466186" lastFinishedPulling="2025-11-22 08:31:09.088214719 +0000 UTC m=+5311.501607987" observedRunningTime="2025-11-22 
08:31:09.70807135 +0000 UTC m=+5312.121464598" watchObservedRunningTime="2025-11-22 08:31:09.70996729 +0000 UTC m=+5312.123360548" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.562585 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-468vd"] Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.565273 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.586967 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-468vd"] Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.598077 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-catalog-content\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.598191 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q45b\" (UniqueName: \"kubernetes.io/projected/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-kube-api-access-4q45b\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.598230 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-utilities\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.700004 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4q45b\" (UniqueName: \"kubernetes.io/projected/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-kube-api-access-4q45b\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.700057 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-utilities\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.700156 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-catalog-content\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.700757 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-catalog-content\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 
08:31:11.701217 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-utilities\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.726960 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4q45b\" (UniqueName: \"kubernetes.io/projected/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-kube-api-access-4q45b\") pod \"certified-operators-468vd\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.753472 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f5bln"] Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.755193 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.765072 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f5bln"] Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.801813 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-catalog-content\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.801886 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-kube-api-access-dqrk9\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.801910 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-utilities\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.885640 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.908916 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-catalog-content\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.908976 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-kube-api-access-dqrk9\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.909011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-utilities\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.909467 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-catalog-content\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.909501 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-utilities\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:11 crc kubenswrapper[4856]: I1122 08:31:11.928533 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-kube-api-access-dqrk9\") pod \"redhat-operators-f5bln\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:12 crc kubenswrapper[4856]: I1122 08:31:12.091836 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:12 crc kubenswrapper[4856]: I1122 08:31:12.154590 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-468vd"] Nov 22 08:31:12 crc kubenswrapper[4856]: W1122 08:31:12.169480 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebda7f97_aea9_4166_9a1c_1e365ec0e31c.slice/crio-57bbfa336038c0c890133c33f9bef23ac5b29d41569ecf0a57edb15ad0e1e292 WatchSource:0}: Error finding container 57bbfa336038c0c890133c33f9bef23ac5b29d41569ecf0a57edb15ad0e1e292: Status 404 returned error can't find the container with id 57bbfa336038c0c890133c33f9bef23ac5b29d41569ecf0a57edb15ad0e1e292 Nov 22 08:31:12 crc kubenswrapper[4856]: I1122 08:31:12.612978 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f5bln"] Nov 22 08:31:12 crc kubenswrapper[4856]: W1122 08:31:12.616961 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9b826cc_0b90_44df_b522_01b2a3a6c2a3.slice/crio-82bf25656c80666684bc03602a7dfa3e616fd9828d8141936782bc01b91097e2 WatchSource:0}: Error finding container 82bf25656c80666684bc03602a7dfa3e616fd9828d8141936782bc01b91097e2: Status 404 returned error can't find the container with id 82bf25656c80666684bc03602a7dfa3e616fd9828d8141936782bc01b91097e2 Nov 22 08:31:12 crc kubenswrapper[4856]: I1122 08:31:12.717170 4856 generic.go:334] "Generic (PLEG): container finished" podID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerID="ab823d4bb1b732ed0ceb6931cc201494a58072e5d8cfda7531c76c833d1d4a16" exitCode=0 Nov 22 08:31:12 crc kubenswrapper[4856]: I1122 08:31:12.720552 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5bln" event={"ID":"e9b826cc-0b90-44df-b522-01b2a3a6c2a3","Type":"ContainerStarted","Data":"82bf25656c80666684bc03602a7dfa3e616fd9828d8141936782bc01b91097e2"} Nov 22 08:31:12 crc kubenswrapper[4856]: I1122 08:31:12.720599 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-468vd" event={"ID":"ebda7f97-aea9-4166-9a1c-1e365ec0e31c","Type":"ContainerDied","Data":"ab823d4bb1b732ed0ceb6931cc201494a58072e5d8cfda7531c76c833d1d4a16"} Nov 22 08:31:12 crc kubenswrapper[4856]: I1122 08:31:12.720627 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-468vd" event={"ID":"ebda7f97-aea9-4166-9a1c-1e365ec0e31c","Type":"ContainerStarted","Data":"57bbfa336038c0c890133c33f9bef23ac5b29d41569ecf0a57edb15ad0e1e292"} Nov 22 08:31:13 crc kubenswrapper[4856]: I1122 08:31:13.727203 4856 generic.go:334] "Generic (PLEG): container finished" podID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerID="84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65" exitCode=0 Nov 22 08:31:13 crc kubenswrapper[4856]: I1122 08:31:13.727304 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5bln" event={"ID":"e9b826cc-0b90-44df-b522-01b2a3a6c2a3","Type":"ContainerDied","Data":"84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65"} Nov 22 08:31:13 crc kubenswrapper[4856]: I1122 08:31:13.730255 4856 generic.go:334] "Generic (PLEG): container finished" podID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerID="aa9eba672538de675e920d6537ff9cefc21d26251995792d78c6adc2244d03de" exitCode=0 Nov 
22 08:31:13 crc kubenswrapper[4856]: I1122 08:31:13.730311 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-468vd" event={"ID":"ebda7f97-aea9-4166-9a1c-1e365ec0e31c","Type":"ContainerDied","Data":"aa9eba672538de675e920d6537ff9cefc21d26251995792d78c6adc2244d03de"} Nov 22 08:31:13 crc kubenswrapper[4856]: I1122 08:31:13.950350 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxld8"] Nov 22 08:31:13 crc kubenswrapper[4856]: I1122 08:31:13.952231 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:13 crc kubenswrapper[4856]: I1122 08:31:13.967426 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxld8"] Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.040757 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-utilities\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.040811 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-catalog-content\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.040906 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7kh2\" (UniqueName: \"kubernetes.io/projected/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-kube-api-access-d7kh2\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.142181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-utilities\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.142744 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-catalog-content\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.142677 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-utilities\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.143232 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-catalog-content\") pod \"redhat-marketplace-lxld8\" (UID: 
\"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.143308 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7kh2\" (UniqueName: \"kubernetes.io/projected/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-kube-api-access-d7kh2\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.167655 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7kh2\" (UniqueName: \"kubernetes.io/projected/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-kube-api-access-d7kh2\") pod \"redhat-marketplace-lxld8\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.273602 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.727470 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxld8"] Nov 22 08:31:14 crc kubenswrapper[4856]: W1122 08:31:14.734121 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff6d6594_402d_4494_8324_a2cfdcb9d8e6.slice/crio-fc6bd4ad254ba32c3dc65106e1471edcfe5e1de7a95db5acc64d360c7bb386a3 WatchSource:0}: Error finding container fc6bd4ad254ba32c3dc65106e1471edcfe5e1de7a95db5acc64d360c7bb386a3: Status 404 returned error can't find the container with id fc6bd4ad254ba32c3dc65106e1471edcfe5e1de7a95db5acc64d360c7bb386a3 Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.739608 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5bln" event={"ID":"e9b826cc-0b90-44df-b522-01b2a3a6c2a3","Type":"ContainerStarted","Data":"20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3"} Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.742107 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-468vd" event={"ID":"ebda7f97-aea9-4166-9a1c-1e365ec0e31c","Type":"ContainerStarted","Data":"3e71299c17c399f061564b2f4e7f4e43317942b01a04e94c35ebae38c1893adb"} Nov 22 08:31:14 crc kubenswrapper[4856]: I1122 08:31:14.775979 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-468vd" podStartSLOduration=2.128418474 podStartE2EDuration="3.775957572s" podCreationTimestamp="2025-11-22 08:31:11 +0000 UTC" firstStartedPulling="2025-11-22 08:31:12.71847449 +0000 UTC m=+5315.131867748" lastFinishedPulling="2025-11-22 08:31:14.366013588 +0000 UTC m=+5316.779406846" observedRunningTime="2025-11-22 08:31:14.77398325 +0000 UTC m=+5317.187376528" watchObservedRunningTime="2025-11-22 08:31:14.775957572 +0000 UTC m=+5317.189350820" Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.091191 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.091245 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.137203 4856 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.750681 4856 generic.go:334] "Generic (PLEG): container finished" podID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerID="1b6b147ce1a13203c0204ca3eac491da59db8f0c95c059b3db1d6567b9b444cc" exitCode=0 Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.750816 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxld8" event={"ID":"ff6d6594-402d-4494-8324-a2cfdcb9d8e6","Type":"ContainerDied","Data":"1b6b147ce1a13203c0204ca3eac491da59db8f0c95c059b3db1d6567b9b444cc"} Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.751072 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxld8" event={"ID":"ff6d6594-402d-4494-8324-a2cfdcb9d8e6","Type":"ContainerStarted","Data":"fc6bd4ad254ba32c3dc65106e1471edcfe5e1de7a95db5acc64d360c7bb386a3"} Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.753470 4856 generic.go:334] "Generic (PLEG): container finished" podID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerID="20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3" exitCode=0 Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.753536 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5bln" event={"ID":"e9b826cc-0b90-44df-b522-01b2a3a6c2a3","Type":"ContainerDied","Data":"20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3"} Nov 22 08:31:15 crc kubenswrapper[4856]: I1122 08:31:15.801324 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:16 crc kubenswrapper[4856]: I1122 08:31:16.765713 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5bln" event={"ID":"e9b826cc-0b90-44df-b522-01b2a3a6c2a3","Type":"ContainerStarted","Data":"a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6"} Nov 22 08:31:16 crc kubenswrapper[4856]: I1122 08:31:16.768600 4856 generic.go:334] "Generic (PLEG): container finished" podID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerID="9cf7d68873e80764d08e86ca2b7155a36274d45a41648f3077bd1db2270c2fa4" exitCode=0 Nov 22 08:31:16 crc kubenswrapper[4856]: I1122 08:31:16.768695 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxld8" event={"ID":"ff6d6594-402d-4494-8324-a2cfdcb9d8e6","Type":"ContainerDied","Data":"9cf7d68873e80764d08e86ca2b7155a36274d45a41648f3077bd1db2270c2fa4"} Nov 22 08:31:16 crc kubenswrapper[4856]: I1122 08:31:16.792354 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f5bln" podStartSLOduration=3.178221227 podStartE2EDuration="5.792331409s" podCreationTimestamp="2025-11-22 08:31:11 +0000 UTC" firstStartedPulling="2025-11-22 08:31:13.729216563 +0000 UTC m=+5316.142609811" lastFinishedPulling="2025-11-22 08:31:16.343326735 +0000 UTC m=+5318.756719993" observedRunningTime="2025-11-22 08:31:16.786498272 +0000 UTC m=+5319.199891540" watchObservedRunningTime="2025-11-22 08:31:16.792331409 +0000 UTC m=+5319.205724677" Nov 22 08:31:17 crc kubenswrapper[4856]: I1122 08:31:17.780390 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxld8" 
event={"ID":"ff6d6594-402d-4494-8324-a2cfdcb9d8e6","Type":"ContainerStarted","Data":"e87a55c1cb58d50051855ad316de38a4b79b2e1b3468b596acebb30b80cd4800"} Nov 22 08:31:18 crc kubenswrapper[4856]: I1122 08:31:18.938336 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxld8" podStartSLOduration=4.515240871 podStartE2EDuration="5.938311263s" podCreationTimestamp="2025-11-22 08:31:13 +0000 UTC" firstStartedPulling="2025-11-22 08:31:15.753532092 +0000 UTC m=+5318.166925340" lastFinishedPulling="2025-11-22 08:31:17.176602484 +0000 UTC m=+5319.589995732" observedRunningTime="2025-11-22 08:31:17.801156751 +0000 UTC m=+5320.214550009" watchObservedRunningTime="2025-11-22 08:31:18.938311263 +0000 UTC m=+5321.351704541" Nov 22 08:31:18 crc kubenswrapper[4856]: I1122 08:31:18.943949 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8zhh"] Nov 22 08:31:18 crc kubenswrapper[4856]: I1122 08:31:18.944478 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8zhh" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="registry-server" containerID="cri-o://b87e517ef9745f76584bd324723f87ae3fa0efd59c92043300fc4ec71bb70f8b" gracePeriod=2 Nov 22 08:31:19 crc kubenswrapper[4856]: I1122 08:31:19.799778 4856 generic.go:334] "Generic (PLEG): container finished" podID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerID="b87e517ef9745f76584bd324723f87ae3fa0efd59c92043300fc4ec71bb70f8b" exitCode=0 Nov 22 08:31:19 crc kubenswrapper[4856]: I1122 08:31:19.799827 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8zhh" event={"ID":"30d7b79e-2d45-424f-a9fd-477cebc40298","Type":"ContainerDied","Data":"b87e517ef9745f76584bd324723f87ae3fa0efd59c92043300fc4ec71bb70f8b"} Nov 22 08:31:19 crc kubenswrapper[4856]: I1122 08:31:19.799859 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8zhh" event={"ID":"30d7b79e-2d45-424f-a9fd-477cebc40298","Type":"ContainerDied","Data":"47bf1618494008c45f61758d15c70dbdffcae24685241206c4902f6cb182568a"} Nov 22 08:31:19 crc kubenswrapper[4856]: I1122 08:31:19.799873 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47bf1618494008c45f61758d15c70dbdffcae24685241206c4902f6cb182568a" Nov 22 08:31:19 crc kubenswrapper[4856]: I1122 08:31:19.866588 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.030892 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-catalog-content\") pod \"30d7b79e-2d45-424f-a9fd-477cebc40298\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.031334 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-utilities\") pod \"30d7b79e-2d45-424f-a9fd-477cebc40298\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.031582 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh9c4\" (UniqueName: \"kubernetes.io/projected/30d7b79e-2d45-424f-a9fd-477cebc40298-kube-api-access-bh9c4\") pod \"30d7b79e-2d45-424f-a9fd-477cebc40298\" (UID: \"30d7b79e-2d45-424f-a9fd-477cebc40298\") " Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.032356 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-utilities" (OuterVolumeSpecName: "utilities") pod "30d7b79e-2d45-424f-a9fd-477cebc40298" (UID: "30d7b79e-2d45-424f-a9fd-477cebc40298"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.039495 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d7b79e-2d45-424f-a9fd-477cebc40298-kube-api-access-bh9c4" (OuterVolumeSpecName: "kube-api-access-bh9c4") pod "30d7b79e-2d45-424f-a9fd-477cebc40298" (UID: "30d7b79e-2d45-424f-a9fd-477cebc40298"). InnerVolumeSpecName "kube-api-access-bh9c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.080117 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30d7b79e-2d45-424f-a9fd-477cebc40298" (UID: "30d7b79e-2d45-424f-a9fd-477cebc40298"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.134865 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh9c4\" (UniqueName: \"kubernetes.io/projected/30d7b79e-2d45-424f-a9fd-477cebc40298-kube-api-access-bh9c4\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.134919 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.134930 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d7b79e-2d45-424f-a9fd-477cebc40298-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.709969 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:31:20 crc kubenswrapper[4856]: E1122 08:31:20.710298 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.807269 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8zhh" Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.826831 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8zhh"] Nov 22 08:31:20 crc kubenswrapper[4856]: I1122 08:31:20.833593 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8zhh"] Nov 22 08:31:21 crc kubenswrapper[4856]: I1122 08:31:21.886241 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:21 crc kubenswrapper[4856]: I1122 08:31:21.886569 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:21 crc kubenswrapper[4856]: I1122 08:31:21.935668 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:22 crc kubenswrapper[4856]: I1122 08:31:22.092560 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:22 crc kubenswrapper[4856]: I1122 08:31:22.092620 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:22 crc kubenswrapper[4856]: I1122 08:31:22.132329 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:22 crc kubenswrapper[4856]: I1122 08:31:22.719557 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" path="/var/lib/kubelet/pods/30d7b79e-2d45-424f-a9fd-477cebc40298/volumes" Nov 22 08:31:22 crc kubenswrapper[4856]: I1122 08:31:22.868671 4856 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:22 crc kubenswrapper[4856]: I1122 08:31:22.871547 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:24 crc kubenswrapper[4856]: I1122 08:31:24.274237 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:24 crc kubenswrapper[4856]: I1122 08:31:24.274327 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:24 crc kubenswrapper[4856]: I1122 08:31:24.331133 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:24 crc kubenswrapper[4856]: I1122 08:31:24.339770 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-468vd"] Nov 22 08:31:24 crc kubenswrapper[4856]: I1122 08:31:24.839894 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-468vd" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerName="registry-server" containerID="cri-o://3e71299c17c399f061564b2f4e7f4e43317942b01a04e94c35ebae38c1893adb" gracePeriod=2 Nov 22 08:31:24 crc kubenswrapper[4856]: I1122 08:31:24.886867 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:25 crc kubenswrapper[4856]: I1122 08:31:25.340960 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f5bln"] Nov 22 08:31:25 crc kubenswrapper[4856]: I1122 08:31:25.341225 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f5bln" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="registry-server" containerID="cri-o://a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6" gracePeriod=2 Nov 22 08:31:26 crc kubenswrapper[4856]: I1122 08:31:26.862223 4856 generic.go:334] "Generic (PLEG): container finished" podID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerID="3e71299c17c399f061564b2f4e7f4e43317942b01a04e94c35ebae38c1893adb" exitCode=0 Nov 22 08:31:26 crc kubenswrapper[4856]: I1122 08:31:26.862364 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-468vd" event={"ID":"ebda7f97-aea9-4166-9a1c-1e365ec0e31c","Type":"ContainerDied","Data":"3e71299c17c399f061564b2f4e7f4e43317942b01a04e94c35ebae38c1893adb"} Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.202931 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.349491 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4q45b\" (UniqueName: \"kubernetes.io/projected/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-kube-api-access-4q45b\") pod \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.349618 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-catalog-content\") pod \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.349724 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-utilities\") pod \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\" (UID: \"ebda7f97-aea9-4166-9a1c-1e365ec0e31c\") " Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.351035 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-utilities" (OuterVolumeSpecName: "utilities") pod "ebda7f97-aea9-4166-9a1c-1e365ec0e31c" (UID: "ebda7f97-aea9-4166-9a1c-1e365ec0e31c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.359422 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-kube-api-access-4q45b" (OuterVolumeSpecName: "kube-api-access-4q45b") pod "ebda7f97-aea9-4166-9a1c-1e365ec0e31c" (UID: "ebda7f97-aea9-4166-9a1c-1e365ec0e31c"). InnerVolumeSpecName "kube-api-access-4q45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.410257 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebda7f97-aea9-4166-9a1c-1e365ec0e31c" (UID: "ebda7f97-aea9-4166-9a1c-1e365ec0e31c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.451131 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.451167 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4q45b\" (UniqueName: \"kubernetes.io/projected/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-kube-api-access-4q45b\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.451177 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebda7f97-aea9-4166-9a1c-1e365ec0e31c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.518609 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.653615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-kube-api-access-dqrk9\") pod \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.653701 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-catalog-content\") pod \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.653807 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-utilities\") pod \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\" (UID: \"e9b826cc-0b90-44df-b522-01b2a3a6c2a3\") " Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.655258 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-utilities" (OuterVolumeSpecName: "utilities") pod "e9b826cc-0b90-44df-b522-01b2a3a6c2a3" (UID: "e9b826cc-0b90-44df-b522-01b2a3a6c2a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.657275 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-kube-api-access-dqrk9" (OuterVolumeSpecName: "kube-api-access-dqrk9") pod "e9b826cc-0b90-44df-b522-01b2a3a6c2a3" (UID: "e9b826cc-0b90-44df-b522-01b2a3a6c2a3"). InnerVolumeSpecName "kube-api-access-dqrk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.740604 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxld8"] Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.740828 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxld8" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="registry-server" containerID="cri-o://e87a55c1cb58d50051855ad316de38a4b79b2e1b3468b596acebb30b80cd4800" gracePeriod=2 Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.750914 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9b826cc-0b90-44df-b522-01b2a3a6c2a3" (UID: "e9b826cc-0b90-44df-b522-01b2a3a6c2a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.755984 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrk9\" (UniqueName: \"kubernetes.io/projected/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-kube-api-access-dqrk9\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.756024 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.756045 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9b826cc-0b90-44df-b522-01b2a3a6c2a3-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.875208 4856 generic.go:334] "Generic (PLEG): container finished" podID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerID="e87a55c1cb58d50051855ad316de38a4b79b2e1b3468b596acebb30b80cd4800" exitCode=0 Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.875258 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxld8" event={"ID":"ff6d6594-402d-4494-8324-a2cfdcb9d8e6","Type":"ContainerDied","Data":"e87a55c1cb58d50051855ad316de38a4b79b2e1b3468b596acebb30b80cd4800"} Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.877959 4856 generic.go:334] "Generic (PLEG): container finished" podID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerID="a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6" exitCode=0 Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.878019 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5bln" event={"ID":"e9b826cc-0b90-44df-b522-01b2a3a6c2a3","Type":"ContainerDied","Data":"a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6"} Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.878052 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f5bln" event={"ID":"e9b826cc-0b90-44df-b522-01b2a3a6c2a3","Type":"ContainerDied","Data":"82bf25656c80666684bc03602a7dfa3e616fd9828d8141936782bc01b91097e2"} Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.878073 4856 scope.go:117] "RemoveContainer" containerID="a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.878076 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f5bln" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.881123 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-468vd" event={"ID":"ebda7f97-aea9-4166-9a1c-1e365ec0e31c","Type":"ContainerDied","Data":"57bbfa336038c0c890133c33f9bef23ac5b29d41569ecf0a57edb15ad0e1e292"} Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.881289 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-468vd" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.897871 4856 scope.go:117] "RemoveContainer" containerID="20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.918696 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f5bln"] Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.924108 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f5bln"] Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.937976 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-468vd"] Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.943194 4856 scope.go:117] "RemoveContainer" containerID="84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.944204 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-468vd"] Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.967421 4856 scope.go:117] "RemoveContainer" containerID="a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6" Nov 22 08:31:27 crc kubenswrapper[4856]: E1122 08:31:27.967907 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6\": container with ID starting with a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6 not found: ID does not exist" containerID="a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.967940 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6"} err="failed to get container status \"a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6\": rpc error: code = NotFound desc = could not find container \"a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6\": container with ID starting with a8604be7d6c8c00ea4c903fc96353da88a3b278c829cad5259a341e9acab7bc6 not found: ID does not exist" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.967963 4856 scope.go:117] "RemoveContainer" containerID="20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3" Nov 22 08:31:27 crc kubenswrapper[4856]: E1122 08:31:27.968764 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3\": container with ID starting with 20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3 not found: ID does not exist" containerID="20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.968825 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3"} err="failed to get container status \"20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3\": rpc error: code = NotFound desc = could not find container \"20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3\": container with ID starting with 
20c734534ad132cf24b4c2c8de89cc5079e1c1ff49b9db5367cf15b543b3e8c3 not found: ID does not exist" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.968866 4856 scope.go:117] "RemoveContainer" containerID="84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65" Nov 22 08:31:27 crc kubenswrapper[4856]: E1122 08:31:27.969274 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65\": container with ID starting with 84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65 not found: ID does not exist" containerID="84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.969304 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65"} err="failed to get container status \"84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65\": rpc error: code = NotFound desc = could not find container \"84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65\": container with ID starting with 84e5c0cd0c0c84345c82a3156ae2c6be2db5233e6f1331826352f056f3d1af65 not found: ID does not exist" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.969323 4856 scope.go:117] "RemoveContainer" containerID="3e71299c17c399f061564b2f4e7f4e43317942b01a04e94c35ebae38c1893adb" Nov 22 08:31:27 crc kubenswrapper[4856]: I1122 08:31:27.992737 4856 scope.go:117] "RemoveContainer" containerID="aa9eba672538de675e920d6537ff9cefc21d26251995792d78c6adc2244d03de" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.016729 4856 scope.go:117] "RemoveContainer" containerID="ab823d4bb1b732ed0ceb6931cc201494a58072e5d8cfda7531c76c833d1d4a16" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.077843 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.262472 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-catalog-content\") pod \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.262992 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-utilities\") pod \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.263048 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7kh2\" (UniqueName: \"kubernetes.io/projected/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-kube-api-access-d7kh2\") pod \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\" (UID: \"ff6d6594-402d-4494-8324-a2cfdcb9d8e6\") " Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.263886 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-utilities" (OuterVolumeSpecName: "utilities") pod "ff6d6594-402d-4494-8324-a2cfdcb9d8e6" (UID: "ff6d6594-402d-4494-8324-a2cfdcb9d8e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.266590 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-kube-api-access-d7kh2" (OuterVolumeSpecName: "kube-api-access-d7kh2") pod "ff6d6594-402d-4494-8324-a2cfdcb9d8e6" (UID: "ff6d6594-402d-4494-8324-a2cfdcb9d8e6"). InnerVolumeSpecName "kube-api-access-d7kh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.279601 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff6d6594-402d-4494-8324-a2cfdcb9d8e6" (UID: "ff6d6594-402d-4494-8324-a2cfdcb9d8e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.364390 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.364440 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7kh2\" (UniqueName: \"kubernetes.io/projected/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-kube-api-access-d7kh2\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.364451 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff6d6594-402d-4494-8324-a2cfdcb9d8e6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.717871 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" path="/var/lib/kubelet/pods/e9b826cc-0b90-44df-b522-01b2a3a6c2a3/volumes" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.718471 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" path="/var/lib/kubelet/pods/ebda7f97-aea9-4166-9a1c-1e365ec0e31c/volumes" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.892589 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxld8" event={"ID":"ff6d6594-402d-4494-8324-a2cfdcb9d8e6","Type":"ContainerDied","Data":"fc6bd4ad254ba32c3dc65106e1471edcfe5e1de7a95db5acc64d360c7bb386a3"} Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.892635 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxld8" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.892642 4856 scope.go:117] "RemoveContainer" containerID="e87a55c1cb58d50051855ad316de38a4b79b2e1b3468b596acebb30b80cd4800" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.911195 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxld8"] Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.914093 4856 scope.go:117] "RemoveContainer" containerID="9cf7d68873e80764d08e86ca2b7155a36274d45a41648f3077bd1db2270c2fa4" Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.922128 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxld8"] Nov 22 08:31:28 crc kubenswrapper[4856]: I1122 08:31:28.934819 4856 scope.go:117] "RemoveContainer" containerID="1b6b147ce1a13203c0204ca3eac491da59db8f0c95c059b3db1d6567b9b444cc" Nov 22 08:31:30 crc kubenswrapper[4856]: I1122 08:31:30.718735 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" path="/var/lib/kubelet/pods/ff6d6594-402d-4494-8324-a2cfdcb9d8e6/volumes" Nov 22 08:31:35 crc kubenswrapper[4856]: I1122 08:31:35.709847 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:31:35 crc kubenswrapper[4856]: E1122 08:31:35.710594 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:31:50 crc kubenswrapper[4856]: I1122 08:31:50.710405 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:31:50 crc kubenswrapper[4856]: E1122 08:31:50.711218 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:32:05 crc kubenswrapper[4856]: I1122 08:32:05.710358 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:32:05 crc kubenswrapper[4856]: E1122 08:32:05.711275 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.885639 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887339 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" 
containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887377 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887423 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887445 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887480 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887500 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887563 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887582 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887610 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887629 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887668 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887685 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887730 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887748 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887776 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887797 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887833 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887850 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887877 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887894 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="extract-utilities" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887924 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887943 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: E1122 08:32:18.887978 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.887997 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerName="extract-content" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.888473 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d7b79e-2d45-424f-a9fd-477cebc40298" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.888558 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebda7f97-aea9-4166-9a1c-1e365ec0e31c" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.888617 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff6d6594-402d-4494-8324-a2cfdcb9d8e6" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.888661 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9b826cc-0b90-44df-b522-01b2a3a6c2a3" containerName="registry-server" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.890272 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.892679 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 08:32:18 crc kubenswrapper[4856]: I1122 08:32:18.893717 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvzxv" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.003482 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\") pod \"mariadb-copy-data\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.003652 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qvs\" (UniqueName: \"kubernetes.io/projected/7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb-kube-api-access-q4qvs\") pod \"mariadb-copy-data\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.104966 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qvs\" (UniqueName: \"kubernetes.io/projected/7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb-kube-api-access-q4qvs\") pod \"mariadb-copy-data\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.105076 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\") pod \"mariadb-copy-data\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.108748 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.108783 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\") pod \"mariadb-copy-data\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e94acb930a77837d3d8a2a658beb1d4748d6239599d18568facadbbe20051d5d/globalmount\"" pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.131048 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qvs\" (UniqueName: \"kubernetes.io/projected/7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb-kube-api-access-q4qvs\") pod \"mariadb-copy-data\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.138870 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\") pod \"mariadb-copy-data\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.219801 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 08:32:19 crc kubenswrapper[4856]: I1122 08:32:19.726706 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 08:32:20 crc kubenswrapper[4856]: I1122 08:32:20.340011 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb","Type":"ContainerStarted","Data":"2a730927cabd91eacda30b64ce903cd66ad57b902645157e1718e4c206b6427f"} Nov 22 08:32:20 crc kubenswrapper[4856]: I1122 08:32:20.340071 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb","Type":"ContainerStarted","Data":"0152727fc3fa9b518a4fd8341da94d7f298aadb5f6d950b8eaafbd0bc7e344b5"} Nov 22 08:32:20 crc kubenswrapper[4856]: I1122 08:32:20.360251 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=3.360229545 podStartE2EDuration="3.360229545s" podCreationTimestamp="2025-11-22 08:32:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:32:20.354893472 +0000 UTC m=+5382.768286740" watchObservedRunningTime="2025-11-22 08:32:20.360229545 +0000 UTC m=+5382.773622803" Nov 22 08:32:20 crc kubenswrapper[4856]: I1122 08:32:20.709361 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:32:20 crc kubenswrapper[4856]: E1122 08:32:20.709842 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:32:23 crc 
kubenswrapper[4856]: I1122 08:32:23.586232 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:23 crc kubenswrapper[4856]: I1122 08:32:23.587670 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:23 crc kubenswrapper[4856]: I1122 08:32:23.597627 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:23 crc kubenswrapper[4856]: I1122 08:32:23.682739 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxs8v\" (UniqueName: \"kubernetes.io/projected/6b2ac78f-08a4-41b8-913f-d89c279e1f28-kube-api-access-rxs8v\") pod \"mariadb-client\" (UID: \"6b2ac78f-08a4-41b8-913f-d89c279e1f28\") " pod="openstack/mariadb-client" Nov 22 08:32:23 crc kubenswrapper[4856]: I1122 08:32:23.784485 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxs8v\" (UniqueName: \"kubernetes.io/projected/6b2ac78f-08a4-41b8-913f-d89c279e1f28-kube-api-access-rxs8v\") pod \"mariadb-client\" (UID: \"6b2ac78f-08a4-41b8-913f-d89c279e1f28\") " pod="openstack/mariadb-client" Nov 22 08:32:23 crc kubenswrapper[4856]: I1122 08:32:23.804682 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxs8v\" (UniqueName: \"kubernetes.io/projected/6b2ac78f-08a4-41b8-913f-d89c279e1f28-kube-api-access-rxs8v\") pod \"mariadb-client\" (UID: \"6b2ac78f-08a4-41b8-913f-d89c279e1f28\") " pod="openstack/mariadb-client" Nov 22 08:32:23 crc kubenswrapper[4856]: I1122 08:32:23.917653 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:24 crc kubenswrapper[4856]: I1122 08:32:24.319138 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:24 crc kubenswrapper[4856]: W1122 08:32:24.328775 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b2ac78f_08a4_41b8_913f_d89c279e1f28.slice/crio-db18c999e452211f381f206055752e4b6608f35362fcdb928a12e9ba27d23a85 WatchSource:0}: Error finding container db18c999e452211f381f206055752e4b6608f35362fcdb928a12e9ba27d23a85: Status 404 returned error can't find the container with id db18c999e452211f381f206055752e4b6608f35362fcdb928a12e9ba27d23a85 Nov 22 08:32:24 crc kubenswrapper[4856]: I1122 08:32:24.386426 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6b2ac78f-08a4-41b8-913f-d89c279e1f28","Type":"ContainerStarted","Data":"db18c999e452211f381f206055752e4b6608f35362fcdb928a12e9ba27d23a85"} Nov 22 08:32:25 crc kubenswrapper[4856]: I1122 08:32:25.401744 4856 generic.go:334] "Generic (PLEG): container finished" podID="6b2ac78f-08a4-41b8-913f-d89c279e1f28" containerID="ce527cca8d428e0648124727f44117977f3074c8b00841872043c814faf1c91f" exitCode=0 Nov 22 08:32:25 crc kubenswrapper[4856]: I1122 08:32:25.401889 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"6b2ac78f-08a4-41b8-913f-d89c279e1f28","Type":"ContainerDied","Data":"ce527cca8d428e0648124727f44117977f3074c8b00841872043c814faf1c91f"} Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.711626 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.735725 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_6b2ac78f-08a4-41b8-913f-d89c279e1f28/mariadb-client/0.log" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.763294 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.768425 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.840781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxs8v\" (UniqueName: \"kubernetes.io/projected/6b2ac78f-08a4-41b8-913f-d89c279e1f28-kube-api-access-rxs8v\") pod \"6b2ac78f-08a4-41b8-913f-d89c279e1f28\" (UID: \"6b2ac78f-08a4-41b8-913f-d89c279e1f28\") " Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.846255 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b2ac78f-08a4-41b8-913f-d89c279e1f28-kube-api-access-rxs8v" (OuterVolumeSpecName: "kube-api-access-rxs8v") pod "6b2ac78f-08a4-41b8-913f-d89c279e1f28" (UID: "6b2ac78f-08a4-41b8-913f-d89c279e1f28"). InnerVolumeSpecName "kube-api-access-rxs8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.942953 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxs8v\" (UniqueName: \"kubernetes.io/projected/6b2ac78f-08a4-41b8-913f-d89c279e1f28-kube-api-access-rxs8v\") on node \"crc\" DevicePath \"\"" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.955105 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:26 crc kubenswrapper[4856]: E1122 08:32:26.955577 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b2ac78f-08a4-41b8-913f-d89c279e1f28" containerName="mariadb-client" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.955602 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b2ac78f-08a4-41b8-913f-d89c279e1f28" containerName="mariadb-client" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.955810 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b2ac78f-08a4-41b8-913f-d89c279e1f28" containerName="mariadb-client" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.956493 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:26 crc kubenswrapper[4856]: I1122 08:32:26.972063 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.045196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhj2\" (UniqueName: \"kubernetes.io/projected/3490d65f-4730-41ea-a787-cc97f97f1dcc-kube-api-access-8qhj2\") pod \"mariadb-client\" (UID: \"3490d65f-4730-41ea-a787-cc97f97f1dcc\") " pod="openstack/mariadb-client" Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.147063 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qhj2\" (UniqueName: \"kubernetes.io/projected/3490d65f-4730-41ea-a787-cc97f97f1dcc-kube-api-access-8qhj2\") pod \"mariadb-client\" (UID: \"3490d65f-4730-41ea-a787-cc97f97f1dcc\") " pod="openstack/mariadb-client" Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.175937 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qhj2\" (UniqueName: \"kubernetes.io/projected/3490d65f-4730-41ea-a787-cc97f97f1dcc-kube-api-access-8qhj2\") pod \"mariadb-client\" (UID: \"3490d65f-4730-41ea-a787-cc97f97f1dcc\") " pod="openstack/mariadb-client" Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.279195 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.420209 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db18c999e452211f381f206055752e4b6608f35362fcdb928a12e9ba27d23a85" Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.420381 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.448467 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="6b2ac78f-08a4-41b8-913f-d89c279e1f28" podUID="3490d65f-4730-41ea-a787-cc97f97f1dcc" Nov 22 08:32:27 crc kubenswrapper[4856]: I1122 08:32:27.714250 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:27 crc kubenswrapper[4856]: W1122 08:32:27.725503 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3490d65f_4730_41ea_a787_cc97f97f1dcc.slice/crio-3bf9000a3308d2986e1c40676cad9d0a846b1f7f1df06a1427b0aa04a3a757e9 WatchSource:0}: Error finding container 3bf9000a3308d2986e1c40676cad9d0a846b1f7f1df06a1427b0aa04a3a757e9: Status 404 returned error can't find the container with id 3bf9000a3308d2986e1c40676cad9d0a846b1f7f1df06a1427b0aa04a3a757e9 Nov 22 08:32:28 crc kubenswrapper[4856]: I1122 08:32:28.430341 4856 generic.go:334] "Generic (PLEG): container finished" podID="3490d65f-4730-41ea-a787-cc97f97f1dcc" containerID="82dee7ba9b70faf2acdcf9403ef9b6dd28fa4226764a0c7388dfe58fefc7d0ee" exitCode=0 Nov 22 08:32:28 crc kubenswrapper[4856]: I1122 08:32:28.430401 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3490d65f-4730-41ea-a787-cc97f97f1dcc","Type":"ContainerDied","Data":"82dee7ba9b70faf2acdcf9403ef9b6dd28fa4226764a0c7388dfe58fefc7d0ee"} Nov 22 08:32:28 crc kubenswrapper[4856]: I1122 08:32:28.430744 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3490d65f-4730-41ea-a787-cc97f97f1dcc","Type":"ContainerStarted","Data":"3bf9000a3308d2986e1c40676cad9d0a846b1f7f1df06a1427b0aa04a3a757e9"} Nov 22 08:32:28 crc kubenswrapper[4856]: I1122 08:32:28.727267 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b2ac78f-08a4-41b8-913f-d89c279e1f28" path="/var/lib/kubelet/pods/6b2ac78f-08a4-41b8-913f-d89c279e1f28/volumes" Nov 22 08:32:29 crc kubenswrapper[4856]: I1122 08:32:29.764635 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:29 crc kubenswrapper[4856]: I1122 08:32:29.783163 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_3490d65f-4730-41ea-a787-cc97f97f1dcc/mariadb-client/0.log" Nov 22 08:32:29 crc kubenswrapper[4856]: I1122 08:32:29.810935 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:29 crc kubenswrapper[4856]: I1122 08:32:29.822079 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Nov 22 08:32:29 crc kubenswrapper[4856]: I1122 08:32:29.895218 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qhj2\" (UniqueName: \"kubernetes.io/projected/3490d65f-4730-41ea-a787-cc97f97f1dcc-kube-api-access-8qhj2\") pod \"3490d65f-4730-41ea-a787-cc97f97f1dcc\" (UID: \"3490d65f-4730-41ea-a787-cc97f97f1dcc\") " Nov 22 08:32:29 crc kubenswrapper[4856]: I1122 08:32:29.902711 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3490d65f-4730-41ea-a787-cc97f97f1dcc-kube-api-access-8qhj2" (OuterVolumeSpecName: "kube-api-access-8qhj2") pod "3490d65f-4730-41ea-a787-cc97f97f1dcc" (UID: "3490d65f-4730-41ea-a787-cc97f97f1dcc"). 
InnerVolumeSpecName "kube-api-access-8qhj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:32:29 crc kubenswrapper[4856]: I1122 08:32:29.997125 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qhj2\" (UniqueName: \"kubernetes.io/projected/3490d65f-4730-41ea-a787-cc97f97f1dcc-kube-api-access-8qhj2\") on node \"crc\" DevicePath \"\"" Nov 22 08:32:30 crc kubenswrapper[4856]: I1122 08:32:30.450966 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf9000a3308d2986e1c40676cad9d0a846b1f7f1df06a1427b0aa04a3a757e9" Nov 22 08:32:30 crc kubenswrapper[4856]: I1122 08:32:30.451383 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Nov 22 08:32:30 crc kubenswrapper[4856]: I1122 08:32:30.725688 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3490d65f-4730-41ea-a787-cc97f97f1dcc" path="/var/lib/kubelet/pods/3490d65f-4730-41ea-a787-cc97f97f1dcc/volumes" Nov 22 08:32:34 crc kubenswrapper[4856]: I1122 08:32:34.710472 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:32:34 crc kubenswrapper[4856]: E1122 08:32:34.711401 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:32:48 crc kubenswrapper[4856]: I1122 08:32:48.715555 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:32:48 crc kubenswrapper[4856]: E1122 08:32:48.717309 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:33:00 crc kubenswrapper[4856]: I1122 08:33:00.709646 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:33:00 crc kubenswrapper[4856]: E1122 08:33:00.710610 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:33:11 crc kubenswrapper[4856]: I1122 08:33:11.710229 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:33:11 crc kubenswrapper[4856]: E1122 08:33:11.711156 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.270687 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 08:33:13 crc kubenswrapper[4856]: E1122 08:33:13.271772 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3490d65f-4730-41ea-a787-cc97f97f1dcc" containerName="mariadb-client" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.271795 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3490d65f-4730-41ea-a787-cc97f97f1dcc" containerName="mariadb-client" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.272062 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3490d65f-4730-41ea-a787-cc97f97f1dcc" containerName="mariadb-client" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.275972 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.283659 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.286345 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.286708 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.286747 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-8zths" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.295689 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.301114 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.312598 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.315107 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.330892 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.332299 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.347716 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.353145 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.425870 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.425953 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426197 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426254 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837136ea-05b1-42f9-8af2-806dba026c53-config\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426350 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426411 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdda5e7f-56ae-4427-8099-7e1291cc5296-config\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426480 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/837136ea-05b1-42f9-8af2-806dba026c53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426543 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bdda5e7f-56ae-4427-8099-7e1291cc5296-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426614 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdda5e7f-56ae-4427-8099-7e1291cc5296-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426695 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbrms\" (UniqueName: \"kubernetes.io/projected/837136ea-05b1-42f9-8af2-806dba026c53-kube-api-access-pbrms\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426742 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426822 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.426971 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.427014 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.427103 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chtwj\" (UniqueName: \"kubernetes.io/projected/bdda5e7f-56ae-4427-8099-7e1291cc5296-kube-api-access-chtwj\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.427254 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/837136ea-05b1-42f9-8af2-806dba026c53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.517406 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.520276 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.527828 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.528127 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.530722 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbrms\" (UniqueName: \"kubernetes.io/projected/837136ea-05b1-42f9-8af2-806dba026c53-kube-api-access-pbrms\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.530785 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.530830 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.530865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6728e6f-b4b0-45fc-8745-d9c657c6146f-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.530911 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6728e6f-b4b0-45fc-8745-d9c657c6146f-config\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.530944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.530980 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531018 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531065 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chtwj\" (UniqueName: \"kubernetes.io/projected/bdda5e7f-56ae-4427-8099-7e1291cc5296-kube-api-access-chtwj\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531111 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531175 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/837136ea-05b1-42f9-8af2-806dba026c53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531220 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c37340d7-4801-4280-80d6-48237a659646\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c37340d7-4801-4280-80d6-48237a659646\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531291 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531326 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531372 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531441 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531470 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837136ea-05b1-42f9-8af2-806dba026c53-config\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531532 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531571 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdda5e7f-56ae-4427-8099-7e1291cc5296-config\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531610 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/837136ea-05b1-42f9-8af2-806dba026c53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531643 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bdda5e7f-56ae-4427-8099-7e1291cc5296-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531672 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6728e6f-b4b0-45fc-8745-d9c657c6146f-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531708 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb8rk\" (UniqueName: \"kubernetes.io/projected/a6728e6f-b4b0-45fc-8745-d9c657c6146f-kube-api-access-qb8rk\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.531740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdda5e7f-56ae-4427-8099-7e1291cc5296-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.532254 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.533983 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/837136ea-05b1-42f9-8af2-806dba026c53-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.534112 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bdda5e7f-56ae-4427-8099-7e1291cc5296-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.534826 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/bdda5e7f-56ae-4427-8099-7e1291cc5296-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") 
" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.535339 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdda5e7f-56ae-4427-8099-7e1291cc5296-config\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.535853 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/837136ea-05b1-42f9-8af2-806dba026c53-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.536334 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.536626 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-4q5qb" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.541685 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.542091 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.543136 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/837136ea-05b1-42f9-8af2-806dba026c53-config\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.547304 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.547555 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5f841470e4df68f4ebd377d4433a2e2781080ad7c17ec812290cbfd972b02d9a/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.547304 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.547593 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8c1b73c50f77392068c259582ae70bac722e69eb8268a65495c28b1a849d9ba/globalmount\"" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.548864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/837136ea-05b1-42f9-8af2-806dba026c53-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.550014 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.554982 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.556632 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.558064 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.562493 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bdda5e7f-56ae-4427-8099-7e1291cc5296-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.578604 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbrms\" (UniqueName: \"kubernetes.io/projected/837136ea-05b1-42f9-8af2-806dba026c53-kube-api-access-pbrms\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.580498 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.581669 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chtwj\" (UniqueName: \"kubernetes.io/projected/bdda5e7f-56ae-4427-8099-7e1291cc5296-kube-api-access-chtwj\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.582721 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.593756 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.605287 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.630896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25778c9c-8fdf-4155-be27-0d7b1ae3992f\") pod \"ovsdbserver-nb-1\" (UID: \"bdda5e7f-56ae-4427-8099-7e1291cc5296\") " pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633165 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60cf15d0-8906-47ae-8fb0-ca49be28e48d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633264 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6728e6f-b4b0-45fc-8745-d9c657c6146f-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633328 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd6s6\" (UniqueName: \"kubernetes.io/projected/60cf15d0-8906-47ae-8fb0-ca49be28e48d-kube-api-access-sd6s6\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633355 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb8rk\" (UniqueName: \"kubernetes.io/projected/a6728e6f-b4b0-45fc-8745-d9c657c6146f-kube-api-access-qb8rk\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633382 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633438 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6728e6f-b4b0-45fc-8745-d9c657c6146f-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633467 4856 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6728e6f-b4b0-45fc-8745-d9c657c6146f-config\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633998 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.633911 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a6728e6f-b4b0-45fc-8745-d9c657c6146f-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.634057 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.634118 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.634271 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c37340d7-4801-4280-80d6-48237a659646\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c37340d7-4801-4280-80d6-48237a659646\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.634364 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.635218 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60cf15d0-8906-47ae-8fb0-ca49be28e48d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.635013 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-90a2735e-b061-4c01-8d08-8ab430f4e92f\") pod \"ovsdbserver-nb-0\" (UID: \"837136ea-05b1-42f9-8af2-806dba026c53\") " pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.635340 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.635395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cf15d0-8906-47ae-8fb0-ca49be28e48d-config\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.635127 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6728e6f-b4b0-45fc-8745-d9c657c6146f-config\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.636484 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.636537 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c37340d7-4801-4280-80d6-48237a659646\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c37340d7-4801-4280-80d6-48237a659646\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/00ff32f003f864bb20d26acbbceb8421aef0a5a95408fe55ab71790c0445f5bd/globalmount\"" pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.636676 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6728e6f-b4b0-45fc-8745-d9c657c6146f-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.637346 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.638106 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.638319 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.643156 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6728e6f-b4b0-45fc-8745-d9c657c6146f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.647936 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb8rk\" (UniqueName: \"kubernetes.io/projected/a6728e6f-b4b0-45fc-8745-d9c657c6146f-kube-api-access-qb8rk\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.661753 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c37340d7-4801-4280-80d6-48237a659646\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c37340d7-4801-4280-80d6-48237a659646\") pod \"ovsdbserver-nb-2\" (UID: \"a6728e6f-b4b0-45fc-8745-d9c657c6146f\") " pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.737815 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738347 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789e84c3-d8c1-43e1-8024-de34dc89e648-config\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738408 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738474 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738522 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738550 
4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4glhq\" (UniqueName: \"kubernetes.io/projected/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-kube-api-access-4glhq\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738621 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738644 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738662 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60cf15d0-8906-47ae-8fb0-ca49be28e48d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738712 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/789e84c3-d8c1-43e1-8024-de34dc89e648-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738732 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8njg\" (UniqueName: \"kubernetes.io/projected/789e84c3-d8c1-43e1-8024-de34dc89e648-kube-api-access-r8njg\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738771 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738790 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cf15d0-8906-47ae-8fb0-ca49be28e48d-config\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738810 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738827 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60cf15d0-8906-47ae-8fb0-ca49be28e48d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738877 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-config\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/789e84c3-d8c1-43e1-8024-de34dc89e648-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738947 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd6s6\" (UniqueName: \"kubernetes.io/projected/60cf15d0-8906-47ae-8fb0-ca49be28e48d-kube-api-access-sd6s6\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.738977 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.740576 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/60cf15d0-8906-47ae-8fb0-ca49be28e48d-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.743134 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/60cf15d0-8906-47ae-8fb0-ca49be28e48d-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.743258 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cf15d0-8906-47ae-8fb0-ca49be28e48d-config\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.744407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.745367 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.745406 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/694582084b8b98d9f872277ddeb4c65bb7730fd5cd1fc3c9aff59d38fdd7b7b1/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.748331 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.748554 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60cf15d0-8906-47ae-8fb0-ca49be28e48d-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.758582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd6s6\" (UniqueName: \"kubernetes.io/projected/60cf15d0-8906-47ae-8fb0-ca49be28e48d-kube-api-access-sd6s6\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.786386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-979039a1-0b55-4f3b-b7de-1a73212cc253\") pod \"ovsdbserver-sb-0\" (UID: \"60cf15d0-8906-47ae-8fb0-ca49be28e48d\") " pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841089 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841182 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4glhq\" (UniqueName: \"kubernetes.io/projected/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-kube-api-access-4glhq\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841217 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841269 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\") pod \"ovsdbserver-sb-2\" (UID: 
\"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841403 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/789e84c3-d8c1-43e1-8024-de34dc89e648-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841437 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8njg\" (UniqueName: \"kubernetes.io/projected/789e84c3-d8c1-43e1-8024-de34dc89e648-kube-api-access-r8njg\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841465 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841569 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841619 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-config\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841658 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/789e84c3-d8c1-43e1-8024-de34dc89e648-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841769 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841813 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.841843 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789e84c3-d8c1-43e1-8024-de34dc89e648-config\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.843624 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.846169 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/789e84c3-d8c1-43e1-8024-de34dc89e648-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.846325 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/789e84c3-d8c1-43e1-8024-de34dc89e648-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.848851 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.849479 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-config\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.849693 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/789e84c3-d8c1-43e1-8024-de34dc89e648-config\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.851071 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.851624 4856 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.851762 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4dfa404e7a8780eebeb835e7cc02e5a485c4ebe04d60ea0be3103d560ccc19c4/globalmount\"" pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.851916 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.851665 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.852150 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/393508e834848f1c1fa639b66bef9549168e5884b2c85efbabe4e3b5f48d8889/globalmount\"" pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.852243 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.852588 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.852888 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.856357 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/789e84c3-d8c1-43e1-8024-de34dc89e648-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.860124 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4glhq\" (UniqueName: \"kubernetes.io/projected/f9ef9a9e-2b5f-4833-ae0c-9b205e862eda-kube-api-access-4glhq\") pod 
\"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.862256 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8njg\" (UniqueName: \"kubernetes.io/projected/789e84c3-d8c1-43e1-8024-de34dc89e648-kube-api-access-r8njg\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.883482 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-537c1ba5-68f2-4181-8d10-dfcc73e3f2c6\") pod \"ovsdbserver-sb-1\" (UID: \"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda\") " pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.884265 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8feb2f1f-e903-48ee-ae4a-fb623d2758e2\") pod \"ovsdbserver-sb-2\" (UID: \"789e84c3-d8c1-43e1-8024-de34dc89e648\") " pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.907307 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:13 crc kubenswrapper[4856]: I1122 08:33:13.956632 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.008930 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.018993 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.026053 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.172041 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.197219 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.258010 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 22 08:33:14 crc kubenswrapper[4856]: W1122 08:33:14.264932 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod837136ea_05b1_42f9_8af2_806dba026c53.slice/crio-a4b66d636a516949df44c5f78bf5902c5990dc5597bc5fdb3738b1128257b35b WatchSource:0}: Error finding container a4b66d636a516949df44c5f78bf5902c5990dc5597bc5fdb3738b1128257b35b: Status 404 returned error can't find the container with id a4b66d636a516949df44c5f78bf5902c5990dc5597bc5fdb3738b1128257b35b Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.476240 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Nov 22 08:33:14 crc kubenswrapper[4856]: W1122 08:33:14.495953 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6728e6f_b4b0_45fc_8745_d9c657c6146f.slice/crio-da17e2b98454313a1ce80c64d98f34596dc294e178438e7ced697176c2e40da3 WatchSource:0}: Error finding container da17e2b98454313a1ce80c64d98f34596dc294e178438e7ced697176c2e40da3: Status 404 returned error can't find the container with id da17e2b98454313a1ce80c64d98f34596dc294e178438e7ced697176c2e40da3 Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.599757 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.702990 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Nov 22 08:33:14 crc kubenswrapper[4856]: W1122 08:33:14.711529 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ef9a9e_2b5f_4833_ae0c_9b205e862eda.slice/crio-62887c6040bc30b300ea020269adc6821affac05a5eeb504d9e97060b106d104 WatchSource:0}: Error finding container 62887c6040bc30b300ea020269adc6821affac05a5eeb504d9e97060b106d104: Status 404 returned error can't find the container with id 62887c6040bc30b300ea020269adc6821affac05a5eeb504d9e97060b106d104 Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.847658 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"60cf15d0-8906-47ae-8fb0-ca49be28e48d","Type":"ContainerStarted","Data":"97082ae197c9c8c53da2f18a2cb028e967f0bdbfbed7756f8e8f0c5cc4132f08"} Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.852068 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"a6728e6f-b4b0-45fc-8745-d9c657c6146f","Type":"ContainerStarted","Data":"da17e2b98454313a1ce80c64d98f34596dc294e178438e7ced697176c2e40da3"} Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.854367 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda","Type":"ContainerStarted","Data":"62887c6040bc30b300ea020269adc6821affac05a5eeb504d9e97060b106d104"} Nov 22 08:33:14 crc kubenswrapper[4856]: 
I1122 08:33:14.855457 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"837136ea-05b1-42f9-8af2-806dba026c53","Type":"ContainerStarted","Data":"a4b66d636a516949df44c5f78bf5902c5990dc5597bc5fdb3738b1128257b35b"} Nov 22 08:33:14 crc kubenswrapper[4856]: I1122 08:33:14.856388 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"bdda5e7f-56ae-4427-8099-7e1291cc5296","Type":"ContainerStarted","Data":"f597cf0a44e68367dfc15e2db9d619c8bb8a3e05f7f736ea0210c770d90aed9c"} Nov 22 08:33:15 crc kubenswrapper[4856]: I1122 08:33:15.431206 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Nov 22 08:33:15 crc kubenswrapper[4856]: W1122 08:33:15.434798 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod789e84c3_d8c1_43e1_8024_de34dc89e648.slice/crio-7e99d40d70dd477f1f00d3b3699eba59bbce1c3be14e68983fc5abcfeed9e68a WatchSource:0}: Error finding container 7e99d40d70dd477f1f00d3b3699eba59bbce1c3be14e68983fc5abcfeed9e68a: Status 404 returned error can't find the container with id 7e99d40d70dd477f1f00d3b3699eba59bbce1c3be14e68983fc5abcfeed9e68a Nov 22 08:33:15 crc kubenswrapper[4856]: I1122 08:33:15.870865 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"789e84c3-d8c1-43e1-8024-de34dc89e648","Type":"ContainerStarted","Data":"7e99d40d70dd477f1f00d3b3699eba59bbce1c3be14e68983fc5abcfeed9e68a"} Nov 22 08:33:18 crc kubenswrapper[4856]: I1122 08:33:18.902961 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"a6728e6f-b4b0-45fc-8745-d9c657c6146f","Type":"ContainerStarted","Data":"5f7b01f6732021f5e4817a9ea4fe48ed0b3a8e59f5523c1bbcc4a1caa031a2d8"} Nov 22 08:33:18 crc kubenswrapper[4856]: I1122 08:33:18.906273 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"bdda5e7f-56ae-4427-8099-7e1291cc5296","Type":"ContainerStarted","Data":"738105378a964c45ecc32315d3f3e13c153e61a272b5770dc2cf28322ae78639"} Nov 22 08:33:18 crc kubenswrapper[4856]: I1122 08:33:18.909485 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"60cf15d0-8906-47ae-8fb0-ca49be28e48d","Type":"ContainerStarted","Data":"48a401026ad1bb9c2845d1ccd3f8db1fcced0dd800b9377da2429add24b819db"} Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.922976 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"60cf15d0-8906-47ae-8fb0-ca49be28e48d","Type":"ContainerStarted","Data":"347d3022aee21b4888af164445a380d6a54dcb747f8f000a3d56a10f5346c46b"} Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.926310 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"a6728e6f-b4b0-45fc-8745-d9c657c6146f","Type":"ContainerStarted","Data":"c142ac7ddbe77018406fbb4191dc0964555c60c89d480a5b5198b82a3f3d3ae0"} Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.929165 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"837136ea-05b1-42f9-8af2-806dba026c53","Type":"ContainerStarted","Data":"f59e74c5d59f1ecfba1f81af235c836f55661d6445e2c507f20bf766b758b00a"} Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.929199 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"837136ea-05b1-42f9-8af2-806dba026c53","Type":"ContainerStarted","Data":"a853fdaf1e63603e9113c2c1eba1b5b7882f0f0bb8a64a7c545aac5d5a0cd1cf"} Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.931592 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"bdda5e7f-56ae-4427-8099-7e1291cc5296","Type":"ContainerStarted","Data":"1e912a53997ccb7476f93a5d23bfa63fd298d1bb755634006377c594e6c3df8f"} Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.948105 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.06540741 podStartE2EDuration="7.948079619s" podCreationTimestamp="2025-11-22 08:33:12 +0000 UTC" firstStartedPulling="2025-11-22 08:33:14.620947063 +0000 UTC m=+5437.034340321" lastFinishedPulling="2025-11-22 08:33:18.503619272 +0000 UTC m=+5440.917012530" observedRunningTime="2025-11-22 08:33:19.942382886 +0000 UTC m=+5442.355776164" watchObservedRunningTime="2025-11-22 08:33:19.948079619 +0000 UTC m=+5442.361472877" Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.957669 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:19 crc kubenswrapper[4856]: I1122 08:33:19.965189 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.640553663 podStartE2EDuration="7.965147768s" podCreationTimestamp="2025-11-22 08:33:12 +0000 UTC" firstStartedPulling="2025-11-22 08:33:14.197000731 +0000 UTC m=+5436.610393989" lastFinishedPulling="2025-11-22 08:33:18.521594836 +0000 UTC m=+5440.934988094" observedRunningTime="2025-11-22 08:33:19.964252394 +0000 UTC m=+5442.377645682" watchObservedRunningTime="2025-11-22 08:33:19.965147768 +0000 UTC m=+5442.378541026" Nov 22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.003276 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.994815622 podStartE2EDuration="8.003255313s" podCreationTimestamp="2025-11-22 08:33:12 +0000 UTC" firstStartedPulling="2025-11-22 08:33:14.501295055 +0000 UTC m=+5436.914688303" lastFinishedPulling="2025-11-22 08:33:18.509734736 +0000 UTC m=+5440.923127994" observedRunningTime="2025-11-22 08:33:19.984848218 +0000 UTC m=+5442.398241496" watchObservedRunningTime="2025-11-22 08:33:20.003255313 +0000 UTC m=+5442.416648571" Nov 22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.009145 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.944831 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda","Type":"ContainerStarted","Data":"81e6431c6c0a174ad3f8757f3eee6c02d611ee2dd1509f489c6150879b03a55b"} Nov 22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.945294 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"f9ef9a9e-2b5f-4833-ae0c-9b205e862eda","Type":"ContainerStarted","Data":"0aef7cef901b8ddff13b62cff9d0e804f14aab35551ed76a41325ed74431091e"} Nov 22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.948870 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"789e84c3-d8c1-43e1-8024-de34dc89e648","Type":"ContainerStarted","Data":"4ea26f85194e56d399122c1c0f18e20c7f1f60d5156398cae8c19914091b89e4"} Nov 
22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.948947 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"789e84c3-d8c1-43e1-8024-de34dc89e648","Type":"ContainerStarted","Data":"fed86ede3318f325ad8f37d106a326148bedd3f5a72407bd4667d56136f81a4e"} Nov 22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.976980 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.913059323 podStartE2EDuration="8.97695439s" podCreationTimestamp="2025-11-22 08:33:12 +0000 UTC" firstStartedPulling="2025-11-22 08:33:14.71455376 +0000 UTC m=+5437.127947018" lastFinishedPulling="2025-11-22 08:33:19.778448837 +0000 UTC m=+5442.191842085" observedRunningTime="2025-11-22 08:33:20.967549976 +0000 UTC m=+5443.380943264" watchObservedRunningTime="2025-11-22 08:33:20.97695439 +0000 UTC m=+5443.390347668" Nov 22 08:33:20 crc kubenswrapper[4856]: I1122 08:33:20.981777 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=4.307181131 podStartE2EDuration="8.981740348s" podCreationTimestamp="2025-11-22 08:33:12 +0000 UTC" firstStartedPulling="2025-11-22 08:33:14.267410864 +0000 UTC m=+5436.680804122" lastFinishedPulling="2025-11-22 08:33:18.941970081 +0000 UTC m=+5441.355363339" observedRunningTime="2025-11-22 08:33:20.004136707 +0000 UTC m=+5442.417529965" watchObservedRunningTime="2025-11-22 08:33:20.981740348 +0000 UTC m=+5443.395133646" Nov 22 08:33:21 crc kubenswrapper[4856]: I1122 08:33:21.007581 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.333105858 podStartE2EDuration="9.00747942s" podCreationTimestamp="2025-11-22 08:33:12 +0000 UTC" firstStartedPulling="2025-11-22 08:33:15.437664027 +0000 UTC m=+5437.851057285" lastFinishedPulling="2025-11-22 08:33:20.112037589 +0000 UTC m=+5442.525430847" observedRunningTime="2025-11-22 08:33:21.000897383 +0000 UTC m=+5443.414290681" watchObservedRunningTime="2025-11-22 08:33:21.00747942 +0000 UTC m=+5443.420872708" Nov 22 08:33:22 crc kubenswrapper[4856]: I1122 08:33:22.639220 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:22 crc kubenswrapper[4856]: I1122 08:33:22.697992 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:22 crc kubenswrapper[4856]: I1122 08:33:22.908619 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:22 crc kubenswrapper[4856]: I1122 08:33:22.952223 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:22 crc kubenswrapper[4856]: I1122 08:33:22.964074 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:22 crc kubenswrapper[4856]: I1122 08:33:22.964257 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.012424 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.012913 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.019238 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.027055 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.059947 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.062270 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.063932 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.064306 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.073687 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.114760 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.275270 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-546cdb7f99-r2mhh"] Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.276673 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.278310 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.286137 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-546cdb7f99-r2mhh"] Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.324407 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9q82\" (UniqueName: \"kubernetes.io/projected/6a23e1c5-a203-44ce-8850-cf97ca1f2418-kube-api-access-g9q82\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.324525 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-config\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.324551 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-ovsdbserver-nb\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.324573 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-dns-svc\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " 
pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.426343 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-config\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.426412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-ovsdbserver-nb\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.426446 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-dns-svc\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.426541 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9q82\" (UniqueName: \"kubernetes.io/projected/6a23e1c5-a203-44ce-8850-cf97ca1f2418-kube-api-access-g9q82\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.427379 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-ovsdbserver-nb\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.427404 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-dns-svc\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.427920 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-config\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.447683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9q82\" (UniqueName: \"kubernetes.io/projected/6a23e1c5-a203-44ce-8850-cf97ca1f2418-kube-api-access-g9q82\") pod \"dnsmasq-dns-546cdb7f99-r2mhh\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.544401 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-546cdb7f99-r2mhh"] Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.545306 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.572208 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9b4bf459-hswcq"] Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.573867 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.576849 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.585578 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9b4bf459-hswcq"] Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.629130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrbqf\" (UniqueName: \"kubernetes.io/projected/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-kube-api-access-wrbqf\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.629224 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-dns-svc\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.629279 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.629334 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-config\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.629357 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.680435 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.713963 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:33:23 crc kubenswrapper[4856]: E1122 08:33:23.714338 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.749171 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-config\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.749978 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.750321 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrbqf\" (UniqueName: \"kubernetes.io/projected/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-kube-api-access-wrbqf\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.750500 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-dns-svc\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.750701 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.752132 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.752219 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-dns-svc\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.755425 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-config\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.763197 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 
crc kubenswrapper[4856]: I1122 08:33:23.774683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrbqf\" (UniqueName: \"kubernetes.io/projected/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-kube-api-access-wrbqf\") pod \"dnsmasq-dns-5c9b4bf459-hswcq\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.946971 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.972433 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:23 crc kubenswrapper[4856]: I1122 08:33:23.972470 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.055377 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-546cdb7f99-r2mhh"] Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.065475 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Nov 22 08:33:24 crc kubenswrapper[4856]: W1122 08:33:24.115264 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a23e1c5_a203_44ce_8850_cf97ca1f2418.slice/crio-01bbd9f3e7abc8e79176f65206dd96f467763ff62df42a3f17538253563b28b8 WatchSource:0}: Error finding container 01bbd9f3e7abc8e79176f65206dd96f467763ff62df42a3f17538253563b28b8: Status 404 returned error can't find the container with id 01bbd9f3e7abc8e79176f65206dd96f467763ff62df42a3f17538253563b28b8 Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.415283 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9b4bf459-hswcq"] Nov 22 08:33:24 crc kubenswrapper[4856]: W1122 08:33:24.416983 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b0ed567_75cd_4287_acbc_7ee38aa82f2c.slice/crio-f611df28e4ed07a2ee3ed7933ac6c5415f51d7cc162d2a9867e42a2a0d813ac3 WatchSource:0}: Error finding container f611df28e4ed07a2ee3ed7933ac6c5415f51d7cc162d2a9867e42a2a0d813ac3: Status 404 returned error can't find the container with id f611df28e4ed07a2ee3ed7933ac6c5415f51d7cc162d2a9867e42a2a0d813ac3 Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.982779 4856 generic.go:334] "Generic (PLEG): container finished" podID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerID="42421b71fc2cc2957ae381e9d3ec60fdce4e86d7eaa9cda3115c8602506e210f" exitCode=0 Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.983201 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" event={"ID":"2b0ed567-75cd-4287-acbc-7ee38aa82f2c","Type":"ContainerDied","Data":"42421b71fc2cc2957ae381e9d3ec60fdce4e86d7eaa9cda3115c8602506e210f"} Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.983238 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" event={"ID":"2b0ed567-75cd-4287-acbc-7ee38aa82f2c","Type":"ContainerStarted","Data":"f611df28e4ed07a2ee3ed7933ac6c5415f51d7cc162d2a9867e42a2a0d813ac3"} Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.987180 4856 generic.go:334] "Generic (PLEG): container finished" podID="6a23e1c5-a203-44ce-8850-cf97ca1f2418" 
containerID="32fe1590de4200eb2a2a4a98676ef36ba69716a6a5446dd9882d892355d9875f" exitCode=0 Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.987360 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" event={"ID":"6a23e1c5-a203-44ce-8850-cf97ca1f2418","Type":"ContainerDied","Data":"32fe1590de4200eb2a2a4a98676ef36ba69716a6a5446dd9882d892355d9875f"} Nov 22 08:33:24 crc kubenswrapper[4856]: I1122 08:33:24.987427 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" event={"ID":"6a23e1c5-a203-44ce-8850-cf97ca1f2418","Type":"ContainerStarted","Data":"01bbd9f3e7abc8e79176f65206dd96f467763ff62df42a3f17538253563b28b8"} Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.236249 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.376152 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-config\") pod \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.376258 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-dns-svc\") pod \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.376297 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9q82\" (UniqueName: \"kubernetes.io/projected/6a23e1c5-a203-44ce-8850-cf97ca1f2418-kube-api-access-g9q82\") pod \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.376319 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-ovsdbserver-nb\") pod \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\" (UID: \"6a23e1c5-a203-44ce-8850-cf97ca1f2418\") " Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.380566 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a23e1c5-a203-44ce-8850-cf97ca1f2418-kube-api-access-g9q82" (OuterVolumeSpecName: "kube-api-access-g9q82") pod "6a23e1c5-a203-44ce-8850-cf97ca1f2418" (UID: "6a23e1c5-a203-44ce-8850-cf97ca1f2418"). InnerVolumeSpecName "kube-api-access-g9q82". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.394919 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6a23e1c5-a203-44ce-8850-cf97ca1f2418" (UID: "6a23e1c5-a203-44ce-8850-cf97ca1f2418"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.395112 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-config" (OuterVolumeSpecName: "config") pod "6a23e1c5-a203-44ce-8850-cf97ca1f2418" (UID: "6a23e1c5-a203-44ce-8850-cf97ca1f2418"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.415139 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6a23e1c5-a203-44ce-8850-cf97ca1f2418" (UID: "6a23e1c5-a203-44ce-8850-cf97ca1f2418"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.481166 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.481207 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.481223 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9q82\" (UniqueName: \"kubernetes.io/projected/6a23e1c5-a203-44ce-8850-cf97ca1f2418-kube-api-access-g9q82\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:25 crc kubenswrapper[4856]: I1122 08:33:25.481236 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6a23e1c5-a203-44ce-8850-cf97ca1f2418-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.000565 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" event={"ID":"2b0ed567-75cd-4287-acbc-7ee38aa82f2c","Type":"ContainerStarted","Data":"263222ff655a0aeec4a985bd0a05d94e5023ab7a4a686e486f84876e87f04250"} Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.000862 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.002337 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.002356 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-546cdb7f99-r2mhh" event={"ID":"6a23e1c5-a203-44ce-8850-cf97ca1f2418","Type":"ContainerDied","Data":"01bbd9f3e7abc8e79176f65206dd96f467763ff62df42a3f17538253563b28b8"} Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.002427 4856 scope.go:117] "RemoveContainer" containerID="32fe1590de4200eb2a2a4a98676ef36ba69716a6a5446dd9882d892355d9875f" Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.084271 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" podStartSLOduration=3.084227632 podStartE2EDuration="3.084227632s" podCreationTimestamp="2025-11-22 08:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:33:26.030553779 +0000 UTC m=+5448.443947037" watchObservedRunningTime="2025-11-22 08:33:26.084227632 +0000 UTC m=+5448.497620890" Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.098153 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-546cdb7f99-r2mhh"] Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.104175 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-546cdb7f99-r2mhh"] Nov 22 08:33:26 crc kubenswrapper[4856]: I1122 08:33:26.723354 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a23e1c5-a203-44ce-8850-cf97ca1f2418" path="/var/lib/kubelet/pods/6a23e1c5-a203-44ce-8850-cf97ca1f2418/volumes" Nov 22 08:33:28 crc kubenswrapper[4856]: I1122 08:33:28.973867 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 22 08:33:29 crc kubenswrapper[4856]: I1122 08:33:29.075303 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.237831 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Nov 22 08:33:32 crc kubenswrapper[4856]: E1122 08:33:32.238537 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a23e1c5-a203-44ce-8850-cf97ca1f2418" containerName="init" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.238554 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a23e1c5-a203-44ce-8850-cf97ca1f2418" containerName="init" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.238742 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a23e1c5-a203-44ce-8850-cf97ca1f2418" containerName="init" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.239393 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.241332 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.244954 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.406122 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5747b224-26ef-4a31-82e4-f602c81b2617\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.406215 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/19475584-27e0-4a31-b29f-d93bd563b5ef-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.406299 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqls8\" (UniqueName: \"kubernetes.io/projected/19475584-27e0-4a31-b29f-d93bd563b5ef-kube-api-access-fqls8\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.508357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5747b224-26ef-4a31-82e4-f602c81b2617\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.508449 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/19475584-27e0-4a31-b29f-d93bd563b5ef-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.508494 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqls8\" (UniqueName: \"kubernetes.io/projected/19475584-27e0-4a31-b29f-d93bd563b5ef-kube-api-access-fqls8\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.511581 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.511610 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5747b224-26ef-4a31-82e4-f602c81b2617\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2579ab78fadc535dafd8a315714e0e755bc856fa5c23d4b84c37550ad18b9cbe/globalmount\"" pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.513861 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/19475584-27e0-4a31-b29f-d93bd563b5ef-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.528011 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqls8\" (UniqueName: \"kubernetes.io/projected/19475584-27e0-4a31-b29f-d93bd563b5ef-kube-api-access-fqls8\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.551563 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5747b224-26ef-4a31-82e4-f602c81b2617\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617\") pod \"ovn-copy-data\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " pod="openstack/ovn-copy-data" Nov 22 08:33:32 crc kubenswrapper[4856]: I1122 08:33:32.556947 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 08:33:33 crc kubenswrapper[4856]: I1122 08:33:33.062960 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 08:33:33 crc kubenswrapper[4856]: W1122 08:33:33.071720 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19475584_27e0_4a31_b29f_d93bd563b5ef.slice/crio-0763286be3ab5d2a63be3ca4bd7ca9bcf254cc05cd1c67301d081ec45d05d860 WatchSource:0}: Error finding container 0763286be3ab5d2a63be3ca4bd7ca9bcf254cc05cd1c67301d081ec45d05d860: Status 404 returned error can't find the container with id 0763286be3ab5d2a63be3ca4bd7ca9bcf254cc05cd1c67301d081ec45d05d860 Nov 22 08:33:33 crc kubenswrapper[4856]: I1122 08:33:33.949194 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.010793 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-jhbcc"] Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.011283 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" podUID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerName="dnsmasq-dns" containerID="cri-o://ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9" gracePeriod=10 Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.081564 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"19475584-27e0-4a31-b29f-d93bd563b5ef","Type":"ContainerStarted","Data":"53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3"} Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.082500 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"19475584-27e0-4a31-b29f-d93bd563b5ef","Type":"ContainerStarted","Data":"0763286be3ab5d2a63be3ca4bd7ca9bcf254cc05cd1c67301d081ec45d05d860"} Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.098363 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.879824846 podStartE2EDuration="3.098309091s" podCreationTimestamp="2025-11-22 08:33:31 +0000 UTC" firstStartedPulling="2025-11-22 08:33:33.075128004 +0000 UTC m=+5455.488521262" lastFinishedPulling="2025-11-22 08:33:33.293612209 +0000 UTC m=+5455.707005507" observedRunningTime="2025-11-22 08:33:34.093114061 +0000 UTC m=+5456.506507319" watchObservedRunningTime="2025-11-22 08:33:34.098309091 +0000 UTC m=+5456.511702349" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.430657 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.547174 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkzlp\" (UniqueName: \"kubernetes.io/projected/57d240db-0f55-4656-97a6-3c1059b7eb76-kube-api-access-lkzlp\") pod \"57d240db-0f55-4656-97a6-3c1059b7eb76\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.547257 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-config\") pod \"57d240db-0f55-4656-97a6-3c1059b7eb76\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.547324 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-dns-svc\") pod \"57d240db-0f55-4656-97a6-3c1059b7eb76\" (UID: \"57d240db-0f55-4656-97a6-3c1059b7eb76\") " Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.554439 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d240db-0f55-4656-97a6-3c1059b7eb76-kube-api-access-lkzlp" (OuterVolumeSpecName: "kube-api-access-lkzlp") pod "57d240db-0f55-4656-97a6-3c1059b7eb76" (UID: "57d240db-0f55-4656-97a6-3c1059b7eb76"). InnerVolumeSpecName "kube-api-access-lkzlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.587073 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57d240db-0f55-4656-97a6-3c1059b7eb76" (UID: "57d240db-0f55-4656-97a6-3c1059b7eb76"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.587306 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-config" (OuterVolumeSpecName: "config") pod "57d240db-0f55-4656-97a6-3c1059b7eb76" (UID: "57d240db-0f55-4656-97a6-3c1059b7eb76"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.648766 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkzlp\" (UniqueName: \"kubernetes.io/projected/57d240db-0f55-4656-97a6-3c1059b7eb76-kube-api-access-lkzlp\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.648984 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:34 crc kubenswrapper[4856]: I1122 08:33:34.648996 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57d240db-0f55-4656-97a6-3c1059b7eb76-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.091950 4856 generic.go:334] "Generic (PLEG): container finished" podID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerID="ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9" exitCode=0 Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.091996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" event={"ID":"57d240db-0f55-4656-97a6-3c1059b7eb76","Type":"ContainerDied","Data":"ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9"} Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.092028 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.092053 4856 scope.go:117] "RemoveContainer" containerID="ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.092040 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf8f59b77-jhbcc" event={"ID":"57d240db-0f55-4656-97a6-3c1059b7eb76","Type":"ContainerDied","Data":"2e0d4e29c7cc73cfd24255f19ad9266ea32a3c8cd3c86649dda86045bb2307cb"} Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.108435 4856 scope.go:117] "RemoveContainer" containerID="49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.113619 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-jhbcc"] Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.120850 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf8f59b77-jhbcc"] Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.128207 4856 scope.go:117] "RemoveContainer" containerID="ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9" Nov 22 08:33:35 crc kubenswrapper[4856]: E1122 08:33:35.128742 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9\": container with ID starting with ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9 not found: ID does not exist" containerID="ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.128777 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9"} err="failed to get container status 
\"ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9\": rpc error: code = NotFound desc = could not find container \"ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9\": container with ID starting with ba7a9f66068d18ef6e7ab0f03d7c83affa030518baba9fa96c07abc8cded8fd9 not found: ID does not exist" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.128801 4856 scope.go:117] "RemoveContainer" containerID="49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42" Nov 22 08:33:35 crc kubenswrapper[4856]: E1122 08:33:35.129094 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42\": container with ID starting with 49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42 not found: ID does not exist" containerID="49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.129144 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42"} err="failed to get container status \"49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42\": rpc error: code = NotFound desc = could not find container \"49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42\": container with ID starting with 49590df2dffa9a41117b2e00ed7c6700bcdc06fd46788266c03cc1c11019cc42 not found: ID does not exist" Nov 22 08:33:35 crc kubenswrapper[4856]: I1122 08:33:35.710549 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:33:35 crc kubenswrapper[4856]: E1122 08:33:35.713055 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:33:36 crc kubenswrapper[4856]: I1122 08:33:36.719388 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57d240db-0f55-4656-97a6-3c1059b7eb76" path="/var/lib/kubelet/pods/57d240db-0f55-4656-97a6-3c1059b7eb76/volumes" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.140626 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 22 08:33:40 crc kubenswrapper[4856]: E1122 08:33:40.141270 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerName="init" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.141287 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerName="init" Nov 22 08:33:40 crc kubenswrapper[4856]: E1122 08:33:40.141322 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerName="dnsmasq-dns" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.141330 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerName="dnsmasq-dns" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.142672 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="57d240db-0f55-4656-97a6-3c1059b7eb76" containerName="dnsmasq-dns" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.145536 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.148637 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-djsj6" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.149193 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.149354 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.150555 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.163235 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.240444 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcm2t\" (UniqueName: \"kubernetes.io/projected/466d6ab8-2d26-4845-85a4-d4e652a857e7-kube-api-access-mcm2t\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.240484 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.240534 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.240583 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/466d6ab8-2d26-4845-85a4-d4e652a857e7-config\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.240602 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/466d6ab8-2d26-4845-85a4-d4e652a857e7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.240656 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.240688 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/466d6ab8-2d26-4845-85a4-d4e652a857e7-scripts\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.342206 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcm2t\" (UniqueName: \"kubernetes.io/projected/466d6ab8-2d26-4845-85a4-d4e652a857e7-kube-api-access-mcm2t\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.342246 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.342285 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.342320 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/466d6ab8-2d26-4845-85a4-d4e652a857e7-config\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.342343 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/466d6ab8-2d26-4845-85a4-d4e652a857e7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.342374 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.342407 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/466d6ab8-2d26-4845-85a4-d4e652a857e7-scripts\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.343063 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/466d6ab8-2d26-4845-85a4-d4e652a857e7-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.343486 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/466d6ab8-2d26-4845-85a4-d4e652a857e7-scripts\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.343493 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/466d6ab8-2d26-4845-85a4-d4e652a857e7-config\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.348329 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.348621 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.350061 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466d6ab8-2d26-4845-85a4-d4e652a857e7-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.357904 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcm2t\" (UniqueName: \"kubernetes.io/projected/466d6ab8-2d26-4845-85a4-d4e652a857e7-kube-api-access-mcm2t\") pod \"ovn-northd-0\" (UID: \"466d6ab8-2d26-4845-85a4-d4e652a857e7\") " pod="openstack/ovn-northd-0" Nov 22 08:33:40 crc kubenswrapper[4856]: I1122 08:33:40.484169 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 22 08:33:41 crc kubenswrapper[4856]: I1122 08:33:41.028596 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 22 08:33:41 crc kubenswrapper[4856]: W1122 08:33:41.033995 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466d6ab8_2d26_4845_85a4_d4e652a857e7.slice/crio-2fffc6fea0cd986e522b12cec768953db8d9e3e85d5bf819bffee7d1d5d190a6 WatchSource:0}: Error finding container 2fffc6fea0cd986e522b12cec768953db8d9e3e85d5bf819bffee7d1d5d190a6: Status 404 returned error can't find the container with id 2fffc6fea0cd986e522b12cec768953db8d9e3e85d5bf819bffee7d1d5d190a6 Nov 22 08:33:41 crc kubenswrapper[4856]: I1122 08:33:41.157718 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"466d6ab8-2d26-4845-85a4-d4e652a857e7","Type":"ContainerStarted","Data":"2fffc6fea0cd986e522b12cec768953db8d9e3e85d5bf819bffee7d1d5d190a6"} Nov 22 08:33:42 crc kubenswrapper[4856]: I1122 08:33:42.166947 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"466d6ab8-2d26-4845-85a4-d4e652a857e7","Type":"ContainerStarted","Data":"a0708353958ba2b26fc9009264c477d59c5cfeb4fd7d5f3628657b462ae01bc7"} Nov 22 08:33:42 crc kubenswrapper[4856]: I1122 08:33:42.167234 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"466d6ab8-2d26-4845-85a4-d4e652a857e7","Type":"ContainerStarted","Data":"34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8"} Nov 22 08:33:42 crc kubenswrapper[4856]: I1122 08:33:42.167422 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 08:33:42 crc 
kubenswrapper[4856]: I1122 08:33:42.185326 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.5666770429999999 podStartE2EDuration="2.18530699s" podCreationTimestamp="2025-11-22 08:33:40 +0000 UTC" firstStartedPulling="2025-11-22 08:33:41.037403349 +0000 UTC m=+5463.450796597" lastFinishedPulling="2025-11-22 08:33:41.656033286 +0000 UTC m=+5464.069426544" observedRunningTime="2025-11-22 08:33:42.183583004 +0000 UTC m=+5464.596976262" watchObservedRunningTime="2025-11-22 08:33:42.18530699 +0000 UTC m=+5464.598700248" Nov 22 08:33:45 crc kubenswrapper[4856]: I1122 08:33:45.946437 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nzqfv"] Nov 22 08:33:45 crc kubenswrapper[4856]: I1122 08:33:45.947798 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:45 crc kubenswrapper[4856]: I1122 08:33:45.961242 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nzqfv"] Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.041873 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb66f8f-c69f-4526-b599-b5aa8214ad02-operator-scripts\") pod \"keystone-db-create-nzqfv\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.042082 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfs6\" (UniqueName: \"kubernetes.io/projected/4bb66f8f-c69f-4526-b599-b5aa8214ad02-kube-api-access-dcfs6\") pod \"keystone-db-create-nzqfv\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.052403 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a69c-account-create-88z2q"] Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.053563 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.055654 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.063737 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a69c-account-create-88z2q"] Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.143419 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26eea8bb-99a6-46d3-8fad-283cad87cd06-operator-scripts\") pod \"keystone-a69c-account-create-88z2q\" (UID: \"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.143540 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfs6\" (UniqueName: \"kubernetes.io/projected/4bb66f8f-c69f-4526-b599-b5aa8214ad02-kube-api-access-dcfs6\") pod \"keystone-db-create-nzqfv\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.143602 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb66f8f-c69f-4526-b599-b5aa8214ad02-operator-scripts\") pod \"keystone-db-create-nzqfv\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.143639 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6lk\" (UniqueName: \"kubernetes.io/projected/26eea8bb-99a6-46d3-8fad-283cad87cd06-kube-api-access-4g6lk\") pod \"keystone-a69c-account-create-88z2q\" (UID: \"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.144711 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb66f8f-c69f-4526-b599-b5aa8214ad02-operator-scripts\") pod \"keystone-db-create-nzqfv\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.161969 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfs6\" (UniqueName: \"kubernetes.io/projected/4bb66f8f-c69f-4526-b599-b5aa8214ad02-kube-api-access-dcfs6\") pod \"keystone-db-create-nzqfv\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.245398 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6lk\" (UniqueName: \"kubernetes.io/projected/26eea8bb-99a6-46d3-8fad-283cad87cd06-kube-api-access-4g6lk\") pod \"keystone-a69c-account-create-88z2q\" (UID: \"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.245490 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26eea8bb-99a6-46d3-8fad-283cad87cd06-operator-scripts\") pod \"keystone-a69c-account-create-88z2q\" (UID: 
\"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.246212 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26eea8bb-99a6-46d3-8fad-283cad87cd06-operator-scripts\") pod \"keystone-a69c-account-create-88z2q\" (UID: \"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.264063 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6lk\" (UniqueName: \"kubernetes.io/projected/26eea8bb-99a6-46d3-8fad-283cad87cd06-kube-api-access-4g6lk\") pod \"keystone-a69c-account-create-88z2q\" (UID: \"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.275083 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.368757 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:46 crc kubenswrapper[4856]: W1122 08:33:46.707086 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bb66f8f_c69f_4526_b599_b5aa8214ad02.slice/crio-23274b9aaddd7a4bbf838d9dac69bcf24680bfa83a362e45023443f4d4717785 WatchSource:0}: Error finding container 23274b9aaddd7a4bbf838d9dac69bcf24680bfa83a362e45023443f4d4717785: Status 404 returned error can't find the container with id 23274b9aaddd7a4bbf838d9dac69bcf24680bfa83a362e45023443f4d4717785 Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.721051 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nzqfv"] Nov 22 08:33:46 crc kubenswrapper[4856]: I1122 08:33:46.845625 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a69c-account-create-88z2q"] Nov 22 08:33:46 crc kubenswrapper[4856]: W1122 08:33:46.847188 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26eea8bb_99a6_46d3_8fad_283cad87cd06.slice/crio-f0352a4bc51e8b48f46cd314ffa6ab1a93f379c963752f431ab5bd4ba6a3ccf3 WatchSource:0}: Error finding container f0352a4bc51e8b48f46cd314ffa6ab1a93f379c963752f431ab5bd4ba6a3ccf3: Status 404 returned error can't find the container with id f0352a4bc51e8b48f46cd314ffa6ab1a93f379c963752f431ab5bd4ba6a3ccf3 Nov 22 08:33:47 crc kubenswrapper[4856]: I1122 08:33:47.206782 4856 generic.go:334] "Generic (PLEG): container finished" podID="26eea8bb-99a6-46d3-8fad-283cad87cd06" containerID="268d901263ed4e1063f1fee215fd020aaccb13e4d1a0f68eb3ba148400263601" exitCode=0 Nov 22 08:33:47 crc kubenswrapper[4856]: I1122 08:33:47.206852 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a69c-account-create-88z2q" event={"ID":"26eea8bb-99a6-46d3-8fad-283cad87cd06","Type":"ContainerDied","Data":"268d901263ed4e1063f1fee215fd020aaccb13e4d1a0f68eb3ba148400263601"} Nov 22 08:33:47 crc kubenswrapper[4856]: I1122 08:33:47.207140 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a69c-account-create-88z2q" 
event={"ID":"26eea8bb-99a6-46d3-8fad-283cad87cd06","Type":"ContainerStarted","Data":"f0352a4bc51e8b48f46cd314ffa6ab1a93f379c963752f431ab5bd4ba6a3ccf3"} Nov 22 08:33:47 crc kubenswrapper[4856]: I1122 08:33:47.210334 4856 generic.go:334] "Generic (PLEG): container finished" podID="4bb66f8f-c69f-4526-b599-b5aa8214ad02" containerID="6802c3a04d19d09320e10dd4878392cec70a4cde253c8e4cd5bf94b5220e20ac" exitCode=0 Nov 22 08:33:47 crc kubenswrapper[4856]: I1122 08:33:47.210365 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nzqfv" event={"ID":"4bb66f8f-c69f-4526-b599-b5aa8214ad02","Type":"ContainerDied","Data":"6802c3a04d19d09320e10dd4878392cec70a4cde253c8e4cd5bf94b5220e20ac"} Nov 22 08:33:47 crc kubenswrapper[4856]: I1122 08:33:47.210380 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nzqfv" event={"ID":"4bb66f8f-c69f-4526-b599-b5aa8214ad02","Type":"ContainerStarted","Data":"23274b9aaddd7a4bbf838d9dac69bcf24680bfa83a362e45023443f4d4717785"} Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.606854 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.694167 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g6lk\" (UniqueName: \"kubernetes.io/projected/26eea8bb-99a6-46d3-8fad-283cad87cd06-kube-api-access-4g6lk\") pod \"26eea8bb-99a6-46d3-8fad-283cad87cd06\" (UID: \"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.694246 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26eea8bb-99a6-46d3-8fad-283cad87cd06-operator-scripts\") pod \"26eea8bb-99a6-46d3-8fad-283cad87cd06\" (UID: \"26eea8bb-99a6-46d3-8fad-283cad87cd06\") " Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.694987 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26eea8bb-99a6-46d3-8fad-283cad87cd06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26eea8bb-99a6-46d3-8fad-283cad87cd06" (UID: "26eea8bb-99a6-46d3-8fad-283cad87cd06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.699873 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26eea8bb-99a6-46d3-8fad-283cad87cd06-kube-api-access-4g6lk" (OuterVolumeSpecName: "kube-api-access-4g6lk") pod "26eea8bb-99a6-46d3-8fad-283cad87cd06" (UID: "26eea8bb-99a6-46d3-8fad-283cad87cd06"). InnerVolumeSpecName "kube-api-access-4g6lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.717181 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:33:48 crc kubenswrapper[4856]: E1122 08:33:48.717467 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.738617 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.795348 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb66f8f-c69f-4526-b599-b5aa8214ad02-operator-scripts\") pod \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.795499 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcfs6\" (UniqueName: \"kubernetes.io/projected/4bb66f8f-c69f-4526-b599-b5aa8214ad02-kube-api-access-dcfs6\") pod \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\" (UID: \"4bb66f8f-c69f-4526-b599-b5aa8214ad02\") " Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.795844 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb66f8f-c69f-4526-b599-b5aa8214ad02-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4bb66f8f-c69f-4526-b599-b5aa8214ad02" (UID: "4bb66f8f-c69f-4526-b599-b5aa8214ad02"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.796060 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g6lk\" (UniqueName: \"kubernetes.io/projected/26eea8bb-99a6-46d3-8fad-283cad87cd06-kube-api-access-4g6lk\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.796074 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4bb66f8f-c69f-4526-b599-b5aa8214ad02-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.796085 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26eea8bb-99a6-46d3-8fad-283cad87cd06-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.798232 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb66f8f-c69f-4526-b599-b5aa8214ad02-kube-api-access-dcfs6" (OuterVolumeSpecName: "kube-api-access-dcfs6") pod "4bb66f8f-c69f-4526-b599-b5aa8214ad02" (UID: "4bb66f8f-c69f-4526-b599-b5aa8214ad02"). InnerVolumeSpecName "kube-api-access-dcfs6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:33:48 crc kubenswrapper[4856]: I1122 08:33:48.897982 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcfs6\" (UniqueName: \"kubernetes.io/projected/4bb66f8f-c69f-4526-b599-b5aa8214ad02-kube-api-access-dcfs6\") on node \"crc\" DevicePath \"\"" Nov 22 08:33:49 crc kubenswrapper[4856]: I1122 08:33:49.227533 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a69c-account-create-88z2q" Nov 22 08:33:49 crc kubenswrapper[4856]: I1122 08:33:49.227558 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a69c-account-create-88z2q" event={"ID":"26eea8bb-99a6-46d3-8fad-283cad87cd06","Type":"ContainerDied","Data":"f0352a4bc51e8b48f46cd314ffa6ab1a93f379c963752f431ab5bd4ba6a3ccf3"} Nov 22 08:33:49 crc kubenswrapper[4856]: I1122 08:33:49.227605 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0352a4bc51e8b48f46cd314ffa6ab1a93f379c963752f431ab5bd4ba6a3ccf3" Nov 22 08:33:49 crc kubenswrapper[4856]: I1122 08:33:49.229295 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nzqfv" event={"ID":"4bb66f8f-c69f-4526-b599-b5aa8214ad02","Type":"ContainerDied","Data":"23274b9aaddd7a4bbf838d9dac69bcf24680bfa83a362e45023443f4d4717785"} Nov 22 08:33:49 crc kubenswrapper[4856]: I1122 08:33:49.229318 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23274b9aaddd7a4bbf838d9dac69bcf24680bfa83a362e45023443f4d4717785" Nov 22 08:33:49 crc kubenswrapper[4856]: I1122 08:33:49.229337 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nzqfv" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.431793 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-f4hxt"] Nov 22 08:33:51 crc kubenswrapper[4856]: E1122 08:33:51.432877 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb66f8f-c69f-4526-b599-b5aa8214ad02" containerName="mariadb-database-create" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.432903 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb66f8f-c69f-4526-b599-b5aa8214ad02" containerName="mariadb-database-create" Nov 22 08:33:51 crc kubenswrapper[4856]: E1122 08:33:51.432936 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26eea8bb-99a6-46d3-8fad-283cad87cd06" containerName="mariadb-account-create" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.432945 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="26eea8bb-99a6-46d3-8fad-283cad87cd06" containerName="mariadb-account-create" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.447417 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="26eea8bb-99a6-46d3-8fad-283cad87cd06" containerName="mariadb-account-create" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.447552 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb66f8f-c69f-4526-b599-b5aa8214ad02" containerName="mariadb-database-create" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.449161 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.463727 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.463783 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.463813 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.466590 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-spvxg" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.484529 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f4hxt"] Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.557933 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v964f\" (UniqueName: \"kubernetes.io/projected/aa7e0985-459a-4527-83a1-595e7344c8fe-kube-api-access-v964f\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.558010 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-config-data\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.558077 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-combined-ca-bundle\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.659336 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v964f\" (UniqueName: \"kubernetes.io/projected/aa7e0985-459a-4527-83a1-595e7344c8fe-kube-api-access-v964f\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.659404 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-config-data\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.659470 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-combined-ca-bundle\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.664925 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-combined-ca-bundle\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " 
pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.666363 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-config-data\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.679911 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v964f\" (UniqueName: \"kubernetes.io/projected/aa7e0985-459a-4527-83a1-595e7344c8fe-kube-api-access-v964f\") pod \"keystone-db-sync-f4hxt\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:51 crc kubenswrapper[4856]: I1122 08:33:51.787450 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:33:52 crc kubenswrapper[4856]: I1122 08:33:52.228203 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-f4hxt"] Nov 22 08:33:52 crc kubenswrapper[4856]: W1122 08:33:52.233256 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa7e0985_459a_4527_83a1_595e7344c8fe.slice/crio-cdb110dbbd0e1e92c8f4efe592d77edc66b581fbf17144f8c1fb6ce84901bd0a WatchSource:0}: Error finding container cdb110dbbd0e1e92c8f4efe592d77edc66b581fbf17144f8c1fb6ce84901bd0a: Status 404 returned error can't find the container with id cdb110dbbd0e1e92c8f4efe592d77edc66b581fbf17144f8c1fb6ce84901bd0a Nov 22 08:33:52 crc kubenswrapper[4856]: I1122 08:33:52.256779 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4hxt" event={"ID":"aa7e0985-459a-4527-83a1-595e7344c8fe","Type":"ContainerStarted","Data":"cdb110dbbd0e1e92c8f4efe592d77edc66b581fbf17144f8c1fb6ce84901bd0a"} Nov 22 08:33:55 crc kubenswrapper[4856]: I1122 08:33:55.552137 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 22 08:33:58 crc kubenswrapper[4856]: I1122 08:33:58.302626 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4hxt" event={"ID":"aa7e0985-459a-4527-83a1-595e7344c8fe","Type":"ContainerStarted","Data":"e319472080c95830cec12e31aed60ac4e7dd030b360e7baece50b1f50d7f094e"} Nov 22 08:33:58 crc kubenswrapper[4856]: I1122 08:33:58.319449 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-f4hxt" podStartSLOduration=2.051118611 podStartE2EDuration="7.319429705s" podCreationTimestamp="2025-11-22 08:33:51 +0000 UTC" firstStartedPulling="2025-11-22 08:33:52.236728599 +0000 UTC m=+5474.650121877" lastFinishedPulling="2025-11-22 08:33:57.505039703 +0000 UTC m=+5479.918432971" observedRunningTime="2025-11-22 08:33:58.318926612 +0000 UTC m=+5480.732319860" watchObservedRunningTime="2025-11-22 08:33:58.319429705 +0000 UTC m=+5480.732822963" Nov 22 08:34:00 crc kubenswrapper[4856]: I1122 08:34:00.322577 4856 generic.go:334] "Generic (PLEG): container finished" podID="aa7e0985-459a-4527-83a1-595e7344c8fe" containerID="e319472080c95830cec12e31aed60ac4e7dd030b360e7baece50b1f50d7f094e" exitCode=0 Nov 22 08:34:00 crc kubenswrapper[4856]: I1122 08:34:00.322629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4hxt" 
event={"ID":"aa7e0985-459a-4527-83a1-595e7344c8fe","Type":"ContainerDied","Data":"e319472080c95830cec12e31aed60ac4e7dd030b360e7baece50b1f50d7f094e"} Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.613103 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.709999 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:34:01 crc kubenswrapper[4856]: E1122 08:34:01.710303 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.775318 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v964f\" (UniqueName: \"kubernetes.io/projected/aa7e0985-459a-4527-83a1-595e7344c8fe-kube-api-access-v964f\") pod \"aa7e0985-459a-4527-83a1-595e7344c8fe\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.775453 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-config-data\") pod \"aa7e0985-459a-4527-83a1-595e7344c8fe\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.775576 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-combined-ca-bundle\") pod \"aa7e0985-459a-4527-83a1-595e7344c8fe\" (UID: \"aa7e0985-459a-4527-83a1-595e7344c8fe\") " Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.781294 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa7e0985-459a-4527-83a1-595e7344c8fe-kube-api-access-v964f" (OuterVolumeSpecName: "kube-api-access-v964f") pod "aa7e0985-459a-4527-83a1-595e7344c8fe" (UID: "aa7e0985-459a-4527-83a1-595e7344c8fe"). InnerVolumeSpecName "kube-api-access-v964f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.806680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa7e0985-459a-4527-83a1-595e7344c8fe" (UID: "aa7e0985-459a-4527-83a1-595e7344c8fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.819582 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-config-data" (OuterVolumeSpecName: "config-data") pod "aa7e0985-459a-4527-83a1-595e7344c8fe" (UID: "aa7e0985-459a-4527-83a1-595e7344c8fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.878115 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v964f\" (UniqueName: \"kubernetes.io/projected/aa7e0985-459a-4527-83a1-595e7344c8fe-kube-api-access-v964f\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.878188 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:01 crc kubenswrapper[4856]: I1122 08:34:01.878200 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa7e0985-459a-4527-83a1-595e7344c8fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.343561 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-f4hxt" event={"ID":"aa7e0985-459a-4527-83a1-595e7344c8fe","Type":"ContainerDied","Data":"cdb110dbbd0e1e92c8f4efe592d77edc66b581fbf17144f8c1fb6ce84901bd0a"} Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.343919 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdb110dbbd0e1e92c8f4efe592d77edc66b581fbf17144f8c1fb6ce84901bd0a" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.343725 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-f4hxt" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.598131 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79c8568849-6rbtr"] Nov 22 08:34:02 crc kubenswrapper[4856]: E1122 08:34:02.598545 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa7e0985-459a-4527-83a1-595e7344c8fe" containerName="keystone-db-sync" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.598569 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa7e0985-459a-4527-83a1-595e7344c8fe" containerName="keystone-db-sync" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.598843 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa7e0985-459a-4527-83a1-595e7344c8fe" containerName="keystone-db-sync" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.601773 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.633204 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79c8568849-6rbtr"] Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.652243 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-2qmq9"] Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.658923 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.662035 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.662227 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.662408 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.662589 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-spvxg" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.663759 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.694275 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-sb\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.694353 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-nb\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.694381 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-dns-svc\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.694453 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-config\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.694624 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6c48\" (UniqueName: \"kubernetes.io/projected/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-kube-api-access-g6c48\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.746885 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2qmq9"] Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.796360 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-nb\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.797376 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-dns-svc\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.797319 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-nb\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.798021 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-dns-svc\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.798088 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-config\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.798112 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-credential-keys\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.798734 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-config\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.798901 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-scripts\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.798931 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-combined-ca-bundle\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.798979 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stl85\" (UniqueName: \"kubernetes.io/projected/783c541d-fc74-4691-893a-bb2307608caa-kube-api-access-stl85\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.799045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6c48\" (UniqueName: 
\"kubernetes.io/projected/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-kube-api-access-g6c48\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.799087 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-config-data\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.799142 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-sb\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.799187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-fernet-keys\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.799880 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-sb\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.819936 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6c48\" (UniqueName: \"kubernetes.io/projected/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-kube-api-access-g6c48\") pod \"dnsmasq-dns-79c8568849-6rbtr\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.900948 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-config-data\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.901011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-fernet-keys\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.901073 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-credential-keys\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.901909 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-scripts\") pod 
\"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.902166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-combined-ca-bundle\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.902192 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stl85\" (UniqueName: \"kubernetes.io/projected/783c541d-fc74-4691-893a-bb2307608caa-kube-api-access-stl85\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.906182 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-scripts\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.906245 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-config-data\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.908652 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-combined-ca-bundle\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.911042 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-fernet-keys\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.912045 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-credential-keys\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.923862 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:02 crc kubenswrapper[4856]: I1122 08:34:02.924089 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stl85\" (UniqueName: \"kubernetes.io/projected/783c541d-fc74-4691-893a-bb2307608caa-kube-api-access-stl85\") pod \"keystone-bootstrap-2qmq9\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:03 crc kubenswrapper[4856]: I1122 08:34:03.014142 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:03 crc kubenswrapper[4856]: I1122 08:34:03.741964 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-2qmq9"] Nov 22 08:34:03 crc kubenswrapper[4856]: W1122 08:34:03.751268 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod783c541d_fc74_4691_893a_bb2307608caa.slice/crio-29df4c1f7a7f174374e431f0ff4b6cbd32746832435e85dbeea784a08df40632 WatchSource:0}: Error finding container 29df4c1f7a7f174374e431f0ff4b6cbd32746832435e85dbeea784a08df40632: Status 404 returned error can't find the container with id 29df4c1f7a7f174374e431f0ff4b6cbd32746832435e85dbeea784a08df40632 Nov 22 08:34:03 crc kubenswrapper[4856]: I1122 08:34:03.807544 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79c8568849-6rbtr"] Nov 22 08:34:03 crc kubenswrapper[4856]: W1122 08:34:03.814552 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode94128d4_0bfe_4b8c_9d8e_f404bf3beeb6.slice/crio-6f14454caa3adeba823cd39d4414f03cf41fd034b959db025373a87e7cba3755 WatchSource:0}: Error finding container 6f14454caa3adeba823cd39d4414f03cf41fd034b959db025373a87e7cba3755: Status 404 returned error can't find the container with id 6f14454caa3adeba823cd39d4414f03cf41fd034b959db025373a87e7cba3755 Nov 22 08:34:04 crc kubenswrapper[4856]: I1122 08:34:04.411453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2qmq9" event={"ID":"783c541d-fc74-4691-893a-bb2307608caa","Type":"ContainerStarted","Data":"0388f4b5015abc8b5326411d4bbbe616481d9fd72b87afdbcc731d70d01e4921"} Nov 22 08:34:04 crc kubenswrapper[4856]: I1122 08:34:04.411989 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2qmq9" event={"ID":"783c541d-fc74-4691-893a-bb2307608caa","Type":"ContainerStarted","Data":"29df4c1f7a7f174374e431f0ff4b6cbd32746832435e85dbeea784a08df40632"} Nov 22 08:34:04 crc kubenswrapper[4856]: I1122 08:34:04.413318 4856 generic.go:334] "Generic (PLEG): container finished" podID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerID="89521544dfcf8018eefe180857c8a0091f1e6e0da9a68522243837de72213cac" exitCode=0 Nov 22 08:34:04 crc kubenswrapper[4856]: I1122 08:34:04.413359 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" event={"ID":"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6","Type":"ContainerDied","Data":"89521544dfcf8018eefe180857c8a0091f1e6e0da9a68522243837de72213cac"} Nov 22 08:34:04 crc kubenswrapper[4856]: I1122 08:34:04.413379 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" event={"ID":"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6","Type":"ContainerStarted","Data":"6f14454caa3adeba823cd39d4414f03cf41fd034b959db025373a87e7cba3755"} Nov 22 08:34:04 crc kubenswrapper[4856]: I1122 08:34:04.439146 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-2qmq9" podStartSLOduration=2.439102036 podStartE2EDuration="2.439102036s" podCreationTimestamp="2025-11-22 08:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:34:04.426941488 +0000 UTC m=+5486.840334786" watchObservedRunningTime="2025-11-22 08:34:04.439102036 +0000 UTC m=+5486.852495334" Nov 22 
08:34:05 crc kubenswrapper[4856]: I1122 08:34:05.423678 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" event={"ID":"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6","Type":"ContainerStarted","Data":"b25ec2d6e3b8825e022b00e3b389fd83262b4bcefd184959c9285b0f3bd9b54a"} Nov 22 08:34:05 crc kubenswrapper[4856]: I1122 08:34:05.424076 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:05 crc kubenswrapper[4856]: I1122 08:34:05.443088 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" podStartSLOduration=3.443074326 podStartE2EDuration="3.443074326s" podCreationTimestamp="2025-11-22 08:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:34:05.44136705 +0000 UTC m=+5487.854760308" watchObservedRunningTime="2025-11-22 08:34:05.443074326 +0000 UTC m=+5487.856467574" Nov 22 08:34:08 crc kubenswrapper[4856]: I1122 08:34:08.455726 4856 generic.go:334] "Generic (PLEG): container finished" podID="783c541d-fc74-4691-893a-bb2307608caa" containerID="0388f4b5015abc8b5326411d4bbbe616481d9fd72b87afdbcc731d70d01e4921" exitCode=0 Nov 22 08:34:08 crc kubenswrapper[4856]: I1122 08:34:08.455869 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2qmq9" event={"ID":"783c541d-fc74-4691-893a-bb2307608caa","Type":"ContainerDied","Data":"0388f4b5015abc8b5326411d4bbbe616481d9fd72b87afdbcc731d70d01e4921"} Nov 22 08:34:09 crc kubenswrapper[4856]: I1122 08:34:09.819012 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.005380 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-combined-ca-bundle\") pod \"783c541d-fc74-4691-893a-bb2307608caa\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.005478 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-scripts\") pod \"783c541d-fc74-4691-893a-bb2307608caa\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.005618 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-fernet-keys\") pod \"783c541d-fc74-4691-893a-bb2307608caa\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.005651 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stl85\" (UniqueName: \"kubernetes.io/projected/783c541d-fc74-4691-893a-bb2307608caa-kube-api-access-stl85\") pod \"783c541d-fc74-4691-893a-bb2307608caa\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.005742 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-credential-keys\") pod \"783c541d-fc74-4691-893a-bb2307608caa\" (UID: 
\"783c541d-fc74-4691-893a-bb2307608caa\") " Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.006466 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-config-data\") pod \"783c541d-fc74-4691-893a-bb2307608caa\" (UID: \"783c541d-fc74-4691-893a-bb2307608caa\") " Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.010745 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "783c541d-fc74-4691-893a-bb2307608caa" (UID: "783c541d-fc74-4691-893a-bb2307608caa"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.010805 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-scripts" (OuterVolumeSpecName: "scripts") pod "783c541d-fc74-4691-893a-bb2307608caa" (UID: "783c541d-fc74-4691-893a-bb2307608caa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.011371 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783c541d-fc74-4691-893a-bb2307608caa-kube-api-access-stl85" (OuterVolumeSpecName: "kube-api-access-stl85") pod "783c541d-fc74-4691-893a-bb2307608caa" (UID: "783c541d-fc74-4691-893a-bb2307608caa"). InnerVolumeSpecName "kube-api-access-stl85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.011926 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "783c541d-fc74-4691-893a-bb2307608caa" (UID: "783c541d-fc74-4691-893a-bb2307608caa"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.030054 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "783c541d-fc74-4691-893a-bb2307608caa" (UID: "783c541d-fc74-4691-893a-bb2307608caa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.032341 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-config-data" (OuterVolumeSpecName: "config-data") pod "783c541d-fc74-4691-893a-bb2307608caa" (UID: "783c541d-fc74-4691-893a-bb2307608caa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.108301 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.108343 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.108360 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.108370 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.108382 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stl85\" (UniqueName: \"kubernetes.io/projected/783c541d-fc74-4691-893a-bb2307608caa-kube-api-access-stl85\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.108395 4856 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/783c541d-fc74-4691-893a-bb2307608caa-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.478016 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-2qmq9" event={"ID":"783c541d-fc74-4691-893a-bb2307608caa","Type":"ContainerDied","Data":"29df4c1f7a7f174374e431f0ff4b6cbd32746832435e85dbeea784a08df40632"} Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.478051 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-2qmq9" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.478052 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29df4c1f7a7f174374e431f0ff4b6cbd32746832435e85dbeea784a08df40632" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.551947 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-2qmq9"] Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.556934 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-2qmq9"] Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.656765 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-vm9wt"] Nov 22 08:34:10 crc kubenswrapper[4856]: E1122 08:34:10.657361 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783c541d-fc74-4691-893a-bb2307608caa" containerName="keystone-bootstrap" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.657391 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="783c541d-fc74-4691-893a-bb2307608caa" containerName="keystone-bootstrap" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.658663 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="783c541d-fc74-4691-893a-bb2307608caa" containerName="keystone-bootstrap" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.659636 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.664244 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.664251 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.664896 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-spvxg" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.665039 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.665057 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vm9wt"] Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.664989 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.717588 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-credential-keys\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.717650 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjxx\" (UniqueName: \"kubernetes.io/projected/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-kube-api-access-vqjxx\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.717787 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-combined-ca-bundle\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.717942 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-config-data\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.717963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-scripts\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.718013 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-fernet-keys\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.722816 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783c541d-fc74-4691-893a-bb2307608caa" path="/var/lib/kubelet/pods/783c541d-fc74-4691-893a-bb2307608caa/volumes" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.819958 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-config-data\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.820015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-scripts\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.820068 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-fernet-keys\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.820115 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-credential-keys\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.820145 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqjxx\" (UniqueName: \"kubernetes.io/projected/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-kube-api-access-vqjxx\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 
08:34:10.820205 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-combined-ca-bundle\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.824174 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-credential-keys\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.824875 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-fernet-keys\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.825268 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-scripts\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.825494 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-combined-ca-bundle\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.826483 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-config-data\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.835624 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqjxx\" (UniqueName: \"kubernetes.io/projected/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-kube-api-access-vqjxx\") pod \"keystone-bootstrap-vm9wt\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:10 crc kubenswrapper[4856]: I1122 08:34:10.984097 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:11 crc kubenswrapper[4856]: I1122 08:34:11.427041 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-vm9wt"] Nov 22 08:34:11 crc kubenswrapper[4856]: I1122 08:34:11.488062 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vm9wt" event={"ID":"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99","Type":"ContainerStarted","Data":"518b0bb59aff2af6e5fcdb9d395da8894ae7e28009e15b8366a93a9666e67cf2"} Nov 22 08:34:12 crc kubenswrapper[4856]: I1122 08:34:12.498262 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vm9wt" event={"ID":"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99","Type":"ContainerStarted","Data":"42f28f1cc97cbe6a6363460525ec227dec33a31c68c47411b5f3c09cc50fac93"} Nov 22 08:34:12 crc kubenswrapper[4856]: I1122 08:34:12.925571 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:34:12 crc kubenswrapper[4856]: I1122 08:34:12.952888 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-vm9wt" podStartSLOduration=2.952865961 podStartE2EDuration="2.952865961s" podCreationTimestamp="2025-11-22 08:34:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:34:12.521157411 +0000 UTC m=+5494.934550669" watchObservedRunningTime="2025-11-22 08:34:12.952865961 +0000 UTC m=+5495.366259219" Nov 22 08:34:13 crc kubenswrapper[4856]: I1122 08:34:13.014989 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9b4bf459-hswcq"] Nov 22 08:34:13 crc kubenswrapper[4856]: I1122 08:34:13.015298 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerName="dnsmasq-dns" containerID="cri-o://263222ff655a0aeec4a985bd0a05d94e5023ab7a4a686e486f84876e87f04250" gracePeriod=10 Nov 22 08:34:13 crc kubenswrapper[4856]: I1122 08:34:13.508382 4856 generic.go:334] "Generic (PLEG): container finished" podID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerID="263222ff655a0aeec4a985bd0a05d94e5023ab7a4a686e486f84876e87f04250" exitCode=0 Nov 22 08:34:13 crc kubenswrapper[4856]: I1122 08:34:13.508421 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" event={"ID":"2b0ed567-75cd-4287-acbc-7ee38aa82f2c","Type":"ContainerDied","Data":"263222ff655a0aeec4a985bd0a05d94e5023ab7a4a686e486f84876e87f04250"} Nov 22 08:34:13 crc kubenswrapper[4856]: I1122 08:34:13.710482 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:34:13 crc kubenswrapper[4856]: E1122 08:34:13.711124 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.011330 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.181069 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrbqf\" (UniqueName: \"kubernetes.io/projected/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-kube-api-access-wrbqf\") pod \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.181149 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-config\") pod \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.181198 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-nb\") pod \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.181244 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-sb\") pod \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.181375 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-dns-svc\") pod \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\" (UID: \"2b0ed567-75cd-4287-acbc-7ee38aa82f2c\") " Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.190879 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-kube-api-access-wrbqf" (OuterVolumeSpecName: "kube-api-access-wrbqf") pod "2b0ed567-75cd-4287-acbc-7ee38aa82f2c" (UID: "2b0ed567-75cd-4287-acbc-7ee38aa82f2c"). InnerVolumeSpecName "kube-api-access-wrbqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.219872 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-config" (OuterVolumeSpecName: "config") pod "2b0ed567-75cd-4287-acbc-7ee38aa82f2c" (UID: "2b0ed567-75cd-4287-acbc-7ee38aa82f2c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.223135 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b0ed567-75cd-4287-acbc-7ee38aa82f2c" (UID: "2b0ed567-75cd-4287-acbc-7ee38aa82f2c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.231134 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b0ed567-75cd-4287-acbc-7ee38aa82f2c" (UID: "2b0ed567-75cd-4287-acbc-7ee38aa82f2c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.233768 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b0ed567-75cd-4287-acbc-7ee38aa82f2c" (UID: "2b0ed567-75cd-4287-acbc-7ee38aa82f2c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.283759 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.283789 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrbqf\" (UniqueName: \"kubernetes.io/projected/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-kube-api-access-wrbqf\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.283801 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.283809 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.283818 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b0ed567-75cd-4287-acbc-7ee38aa82f2c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.517384 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.517381 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" event={"ID":"2b0ed567-75cd-4287-acbc-7ee38aa82f2c","Type":"ContainerDied","Data":"f611df28e4ed07a2ee3ed7933ac6c5415f51d7cc162d2a9867e42a2a0d813ac3"} Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.517522 4856 scope.go:117] "RemoveContainer" containerID="263222ff655a0aeec4a985bd0a05d94e5023ab7a4a686e486f84876e87f04250" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.520098 4856 generic.go:334] "Generic (PLEG): container finished" podID="b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" containerID="42f28f1cc97cbe6a6363460525ec227dec33a31c68c47411b5f3c09cc50fac93" exitCode=0 Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.520144 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vm9wt" event={"ID":"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99","Type":"ContainerDied","Data":"42f28f1cc97cbe6a6363460525ec227dec33a31c68c47411b5f3c09cc50fac93"} Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.563333 4856 scope.go:117] "RemoveContainer" containerID="42421b71fc2cc2957ae381e9d3ec60fdce4e86d7eaa9cda3115c8602506e210f" Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.569603 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9b4bf459-hswcq"] Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.575208 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9b4bf459-hswcq"] Nov 22 08:34:14 crc kubenswrapper[4856]: I1122 08:34:14.721655 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" path="/var/lib/kubelet/pods/2b0ed567-75cd-4287-acbc-7ee38aa82f2c/volumes" Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.829430 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.912712 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-credential-keys\") pod \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.912785 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqjxx\" (UniqueName: \"kubernetes.io/projected/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-kube-api-access-vqjxx\") pod \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.912867 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-scripts\") pod \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.912884 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-config-data\") pod \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.912902 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-combined-ca-bundle\") pod \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.913047 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-fernet-keys\") pod \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\" (UID: \"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99\") " Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.918083 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-scripts" (OuterVolumeSpecName: "scripts") pod "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" (UID: "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.918104 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-kube-api-access-vqjxx" (OuterVolumeSpecName: "kube-api-access-vqjxx") pod "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" (UID: "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99"). InnerVolumeSpecName "kube-api-access-vqjxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.918186 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" (UID: "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.919634 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" (UID: "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.934754 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" (UID: "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:15 crc kubenswrapper[4856]: I1122 08:34:15.954059 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-config-data" (OuterVolumeSpecName: "config-data") pod "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" (UID: "b2c2f0db-0bef-41d1-8b0c-4e7875e69f99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.014504 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.014551 4856 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.014564 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqjxx\" (UniqueName: \"kubernetes.io/projected/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-kube-api-access-vqjxx\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.014575 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.014583 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.014593 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.550208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-vm9wt" event={"ID":"b2c2f0db-0bef-41d1-8b0c-4e7875e69f99","Type":"ContainerDied","Data":"518b0bb59aff2af6e5fcdb9d395da8894ae7e28009e15b8366a93a9666e67cf2"} Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.550256 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="518b0bb59aff2af6e5fcdb9d395da8894ae7e28009e15b8366a93a9666e67cf2" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.550301 4856 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-vm9wt" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.626215 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7c69fc876-sckl8"] Nov 22 08:34:16 crc kubenswrapper[4856]: E1122 08:34:16.626778 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" containerName="keystone-bootstrap" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.626801 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" containerName="keystone-bootstrap" Nov 22 08:34:16 crc kubenswrapper[4856]: E1122 08:34:16.626840 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerName="dnsmasq-dns" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.626849 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerName="dnsmasq-dns" Nov 22 08:34:16 crc kubenswrapper[4856]: E1122 08:34:16.626865 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerName="init" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.626876 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerName="init" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.627129 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" containerName="keystone-bootstrap" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.627154 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerName="dnsmasq-dns" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.628039 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.630351 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.632084 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.632243 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.632368 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.632993 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.633099 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-spvxg" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.652015 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c69fc876-sckl8"] Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.729489 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-combined-ca-bundle\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.729752 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-internal-tls-certs\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.729805 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-config-data\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.729829 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-scripts\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.729976 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-credential-keys\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.730034 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgssc\" (UniqueName: \"kubernetes.io/projected/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-kube-api-access-wgssc\") pod \"keystone-7c69fc876-sckl8\" (UID: 
\"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.730071 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-fernet-keys\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.730115 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-public-tls-certs\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.831736 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-credential-keys\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.831813 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgssc\" (UniqueName: \"kubernetes.io/projected/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-kube-api-access-wgssc\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.831868 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-fernet-keys\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.831902 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-public-tls-certs\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.831946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-combined-ca-bundle\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.832209 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-internal-tls-certs\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.832267 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-config-data\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 
22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.832320 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-scripts\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.835564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-public-tls-certs\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.836064 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-credential-keys\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.836924 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-fernet-keys\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.840319 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-combined-ca-bundle\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.841131 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-internal-tls-certs\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.841497 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-scripts\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.841531 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-config-data\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.849113 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgssc\" (UniqueName: \"kubernetes.io/projected/cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa-kube-api-access-wgssc\") pod \"keystone-7c69fc876-sckl8\" (UID: \"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa\") " pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:16 crc kubenswrapper[4856]: I1122 08:34:16.956151 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:17 crc kubenswrapper[4856]: I1122 08:34:17.393280 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7c69fc876-sckl8"] Nov 22 08:34:17 crc kubenswrapper[4856]: I1122 08:34:17.560793 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c69fc876-sckl8" event={"ID":"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa","Type":"ContainerStarted","Data":"7a7803290f21a79423eeae130fb2606e182e97042c1aad8424e4f425a462c43b"} Nov 22 08:34:18 crc kubenswrapper[4856]: I1122 08:34:18.578024 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7c69fc876-sckl8" event={"ID":"cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa","Type":"ContainerStarted","Data":"bb858084bea2a1a9dc2bcaed8c1499bdfc235600e94d315cf948c75f02f30331"} Nov 22 08:34:18 crc kubenswrapper[4856]: I1122 08:34:18.578212 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:18 crc kubenswrapper[4856]: I1122 08:34:18.951431 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5c9b4bf459-hswcq" podUID="2b0ed567-75cd-4287-acbc-7ee38aa82f2c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.22:5353: i/o timeout" Nov 22 08:34:24 crc kubenswrapper[4856]: I1122 08:34:24.710682 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:34:24 crc kubenswrapper[4856]: E1122 08:34:24.711976 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:34:37 crc kubenswrapper[4856]: I1122 08:34:37.709884 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:34:37 crc kubenswrapper[4856]: E1122 08:34:37.711100 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:34:48 crc kubenswrapper[4856]: I1122 08:34:48.707634 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7c69fc876-sckl8" Nov 22 08:34:48 crc kubenswrapper[4856]: I1122 08:34:48.742771 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7c69fc876-sckl8" podStartSLOduration=32.742535796 podStartE2EDuration="32.742535796s" podCreationTimestamp="2025-11-22 08:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:34:18.598886424 +0000 UTC m=+5501.012279732" watchObservedRunningTime="2025-11-22 08:34:48.742535796 +0000 UTC m=+5531.155929094" Nov 22 08:34:49 crc kubenswrapper[4856]: I1122 08:34:49.710630 4856 scope.go:117] "RemoveContainer" 
containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:34:49 crc kubenswrapper[4856]: E1122 08:34:49.711489 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.122618 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.123880 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.126016 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.126068 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4w944" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.127260 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.140160 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.303693 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.303754 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpdch\" (UniqueName: \"kubernetes.io/projected/bc894d74-307d-4700-aa80-9a72d7abe560-kube-api-access-jpdch\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.303801 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.303899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.405898 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 
08:34:51.405965 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpdch\" (UniqueName: \"kubernetes.io/projected/bc894d74-307d-4700-aa80-9a72d7abe560-kube-api-access-jpdch\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.406024 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.406070 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.407314 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.412486 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config-secret\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.412992 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-combined-ca-bundle\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.423551 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpdch\" (UniqueName: \"kubernetes.io/projected/bc894d74-307d-4700-aa80-9a72d7abe560-kube-api-access-jpdch\") pod \"openstackclient\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.444287 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.875356 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 08:34:51 crc kubenswrapper[4856]: I1122 08:34:51.909093 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bc894d74-307d-4700-aa80-9a72d7abe560","Type":"ContainerStarted","Data":"f92c46ed40a026c7c88d99f990570024e16326ec14054f9a7e969d7f48ed9e29"} Nov 22 08:35:01 crc kubenswrapper[4856]: I1122 08:35:01.709608 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:35:01 crc kubenswrapper[4856]: E1122 08:35:01.710353 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:35:07 crc kubenswrapper[4856]: I1122 08:35:07.044042 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"bc894d74-307d-4700-aa80-9a72d7abe560","Type":"ContainerStarted","Data":"2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67"} Nov 22 08:35:07 crc kubenswrapper[4856]: I1122 08:35:07.061314 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.099484099 podStartE2EDuration="16.061296681s" podCreationTimestamp="2025-11-22 08:34:51 +0000 UTC" firstStartedPulling="2025-11-22 08:34:51.880290882 +0000 UTC m=+5534.293684140" lastFinishedPulling="2025-11-22 08:35:05.842103474 +0000 UTC m=+5548.255496722" observedRunningTime="2025-11-22 08:35:07.059866143 +0000 UTC m=+5549.473259421" watchObservedRunningTime="2025-11-22 08:35:07.061296681 +0000 UTC m=+5549.474689939" Nov 22 08:35:14 crc kubenswrapper[4856]: I1122 08:35:14.710783 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:35:14 crc kubenswrapper[4856]: E1122 08:35:14.711477 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:35:22 crc kubenswrapper[4856]: I1122 08:35:22.177610 4856 scope.go:117] "RemoveContainer" containerID="15451d7f3bd2fed80a5408c89c6fd368893c6df453879873e0166e42e1108c07" Nov 22 08:35:28 crc kubenswrapper[4856]: I1122 08:35:28.733293 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:35:28 crc kubenswrapper[4856]: E1122 08:35:28.734200 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:35:39 crc kubenswrapper[4856]: I1122 08:35:39.710045 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:35:40 crc kubenswrapper[4856]: I1122 08:35:40.369797 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"d24613435baa98fcf4fed1d58784844b252bad2c404ea5e6d83f40d6769faaee"} Nov 22 08:36:14 crc kubenswrapper[4856]: E1122 08:36:14.035177 4856 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.132:57744->38.102.83.132:36441: write tcp 38.102.83.132:57744->38.102.83.132:36441: write: broken pipe Nov 22 08:36:22 crc kubenswrapper[4856]: I1122 08:36:22.221478 4856 scope.go:117] "RemoveContainer" containerID="dfe69eadfdcdf1efa12481e14c25e47522741266d2e28d1e7e37d662e8bf408a" Nov 22 08:36:22 crc kubenswrapper[4856]: I1122 08:36:22.245622 4856 scope.go:117] "RemoveContainer" containerID="2038a0c4aa339cc24469bcbf53f3f99e34d6cc99f46ee590600ba2617ee2e28e" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.445830 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mltbc"] Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.447408 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.456062 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mltbc"] Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.537391 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-ef3a-account-create-rkn96"] Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.540034 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.543417 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.553707 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ef3a-account-create-rkn96"] Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.601043 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6lck\" (UniqueName: \"kubernetes.io/projected/9091157f-35e8-471a-a784-9dad836695ab-kube-api-access-f6lck\") pod \"barbican-db-create-mltbc\" (UID: \"9091157f-35e8-471a-a784-9dad836695ab\") " pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.601120 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9091157f-35e8-471a-a784-9dad836695ab-operator-scripts\") pod \"barbican-db-create-mltbc\" (UID: \"9091157f-35e8-471a-a784-9dad836695ab\") " pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.702448 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2kh\" (UniqueName: \"kubernetes.io/projected/f5562803-a85b-481f-a8ed-a0c309e63253-kube-api-access-dv2kh\") pod \"barbican-ef3a-account-create-rkn96\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.702582 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6lck\" (UniqueName: \"kubernetes.io/projected/9091157f-35e8-471a-a784-9dad836695ab-kube-api-access-f6lck\") pod \"barbican-db-create-mltbc\" (UID: \"9091157f-35e8-471a-a784-9dad836695ab\") " pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.702629 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9091157f-35e8-471a-a784-9dad836695ab-operator-scripts\") pod \"barbican-db-create-mltbc\" (UID: \"9091157f-35e8-471a-a784-9dad836695ab\") " pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.702656 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5562803-a85b-481f-a8ed-a0c309e63253-operator-scripts\") pod \"barbican-ef3a-account-create-rkn96\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.703600 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9091157f-35e8-471a-a784-9dad836695ab-operator-scripts\") pod \"barbican-db-create-mltbc\" (UID: \"9091157f-35e8-471a-a784-9dad836695ab\") " pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.722741 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6lck\" (UniqueName: \"kubernetes.io/projected/9091157f-35e8-471a-a784-9dad836695ab-kube-api-access-f6lck\") pod \"barbican-db-create-mltbc\" (UID: 
\"9091157f-35e8-471a-a784-9dad836695ab\") " pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.769222 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.803814 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5562803-a85b-481f-a8ed-a0c309e63253-operator-scripts\") pod \"barbican-ef3a-account-create-rkn96\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.804280 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2kh\" (UniqueName: \"kubernetes.io/projected/f5562803-a85b-481f-a8ed-a0c309e63253-kube-api-access-dv2kh\") pod \"barbican-ef3a-account-create-rkn96\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.805161 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5562803-a85b-481f-a8ed-a0c309e63253-operator-scripts\") pod \"barbican-ef3a-account-create-rkn96\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.824789 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv2kh\" (UniqueName: \"kubernetes.io/projected/f5562803-a85b-481f-a8ed-a0c309e63253-kube-api-access-dv2kh\") pod \"barbican-ef3a-account-create-rkn96\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:26 crc kubenswrapper[4856]: I1122 08:36:26.854694 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.209297 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mltbc"] Nov 22 08:36:27 crc kubenswrapper[4856]: W1122 08:36:27.214620 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9091157f_35e8_471a_a784_9dad836695ab.slice/crio-5d09570ea7b4069687bcd5605ef6da99633bf0a29b3bf4298417a8f6a3166d53 WatchSource:0}: Error finding container 5d09570ea7b4069687bcd5605ef6da99633bf0a29b3bf4298417a8f6a3166d53: Status 404 returned error can't find the container with id 5d09570ea7b4069687bcd5605ef6da99633bf0a29b3bf4298417a8f6a3166d53 Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.321374 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-ef3a-account-create-rkn96"] Nov 22 08:36:27 crc kubenswrapper[4856]: W1122 08:36:27.325421 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5562803_a85b_481f_a8ed_a0c309e63253.slice/crio-2617353f9676c7123b5ad27da3d11f6b9aa43bfde8aaa15b7123eeed010aa515 WatchSource:0}: Error finding container 2617353f9676c7123b5ad27da3d11f6b9aa43bfde8aaa15b7123eeed010aa515: Status 404 returned error can't find the container with id 2617353f9676c7123b5ad27da3d11f6b9aa43bfde8aaa15b7123eeed010aa515 Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.817588 4856 generic.go:334] "Generic (PLEG): container finished" podID="f5562803-a85b-481f-a8ed-a0c309e63253" containerID="3b81f97ea589c8b96081f551c6306e09a909dbee2801dc337e1f6dbd5e143d52" exitCode=0 Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.817674 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ef3a-account-create-rkn96" event={"ID":"f5562803-a85b-481f-a8ed-a0c309e63253","Type":"ContainerDied","Data":"3b81f97ea589c8b96081f551c6306e09a909dbee2801dc337e1f6dbd5e143d52"} Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.817700 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ef3a-account-create-rkn96" event={"ID":"f5562803-a85b-481f-a8ed-a0c309e63253","Type":"ContainerStarted","Data":"2617353f9676c7123b5ad27da3d11f6b9aa43bfde8aaa15b7123eeed010aa515"} Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.819022 4856 generic.go:334] "Generic (PLEG): container finished" podID="9091157f-35e8-471a-a784-9dad836695ab" containerID="d9717c199e03ff9daea709aa0670bf7893dd675ccf9ef270d7ed1c515135bff0" exitCode=0 Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.819073 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mltbc" event={"ID":"9091157f-35e8-471a-a784-9dad836695ab","Type":"ContainerDied","Data":"d9717c199e03ff9daea709aa0670bf7893dd675ccf9ef270d7ed1c515135bff0"} Nov 22 08:36:27 crc kubenswrapper[4856]: I1122 08:36:27.819101 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mltbc" event={"ID":"9091157f-35e8-471a-a784-9dad836695ab","Type":"ContainerStarted","Data":"5d09570ea7b4069687bcd5605ef6da99633bf0a29b3bf4298417a8f6a3166d53"} Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.213267 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.220011 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.352452 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dv2kh\" (UniqueName: \"kubernetes.io/projected/f5562803-a85b-481f-a8ed-a0c309e63253-kube-api-access-dv2kh\") pod \"f5562803-a85b-481f-a8ed-a0c309e63253\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.352576 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9091157f-35e8-471a-a784-9dad836695ab-operator-scripts\") pod \"9091157f-35e8-471a-a784-9dad836695ab\" (UID: \"9091157f-35e8-471a-a784-9dad836695ab\") " Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.352646 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6lck\" (UniqueName: \"kubernetes.io/projected/9091157f-35e8-471a-a784-9dad836695ab-kube-api-access-f6lck\") pod \"9091157f-35e8-471a-a784-9dad836695ab\" (UID: \"9091157f-35e8-471a-a784-9dad836695ab\") " Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.352785 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5562803-a85b-481f-a8ed-a0c309e63253-operator-scripts\") pod \"f5562803-a85b-481f-a8ed-a0c309e63253\" (UID: \"f5562803-a85b-481f-a8ed-a0c309e63253\") " Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.353274 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9091157f-35e8-471a-a784-9dad836695ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9091157f-35e8-471a-a784-9dad836695ab" (UID: "9091157f-35e8-471a-a784-9dad836695ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.353888 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5562803-a85b-481f-a8ed-a0c309e63253-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f5562803-a85b-481f-a8ed-a0c309e63253" (UID: "f5562803-a85b-481f-a8ed-a0c309e63253"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.358558 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5562803-a85b-481f-a8ed-a0c309e63253-kube-api-access-dv2kh" (OuterVolumeSpecName: "kube-api-access-dv2kh") pod "f5562803-a85b-481f-a8ed-a0c309e63253" (UID: "f5562803-a85b-481f-a8ed-a0c309e63253"). InnerVolumeSpecName "kube-api-access-dv2kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.358918 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9091157f-35e8-471a-a784-9dad836695ab-kube-api-access-f6lck" (OuterVolumeSpecName: "kube-api-access-f6lck") pod "9091157f-35e8-471a-a784-9dad836695ab" (UID: "9091157f-35e8-471a-a784-9dad836695ab"). InnerVolumeSpecName "kube-api-access-f6lck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.454333 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5562803-a85b-481f-a8ed-a0c309e63253-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.454374 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dv2kh\" (UniqueName: \"kubernetes.io/projected/f5562803-a85b-481f-a8ed-a0c309e63253-kube-api-access-dv2kh\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.454385 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9091157f-35e8-471a-a784-9dad836695ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.454393 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6lck\" (UniqueName: \"kubernetes.io/projected/9091157f-35e8-471a-a784-9dad836695ab-kube-api-access-f6lck\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.835865 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mltbc" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.835866 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mltbc" event={"ID":"9091157f-35e8-471a-a784-9dad836695ab","Type":"ContainerDied","Data":"5d09570ea7b4069687bcd5605ef6da99633bf0a29b3bf4298417a8f6a3166d53"} Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.835991 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d09570ea7b4069687bcd5605ef6da99633bf0a29b3bf4298417a8f6a3166d53" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.837600 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-ef3a-account-create-rkn96" event={"ID":"f5562803-a85b-481f-a8ed-a0c309e63253","Type":"ContainerDied","Data":"2617353f9676c7123b5ad27da3d11f6b9aa43bfde8aaa15b7123eeed010aa515"} Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.837652 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2617353f9676c7123b5ad27da3d11f6b9aa43bfde8aaa15b7123eeed010aa515" Nov 22 08:36:29 crc kubenswrapper[4856]: I1122 08:36:29.837656 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-ef3a-account-create-rkn96" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.895987 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-f865q"] Nov 22 08:36:31 crc kubenswrapper[4856]: E1122 08:36:31.896626 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5562803-a85b-481f-a8ed-a0c309e63253" containerName="mariadb-account-create" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.896642 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5562803-a85b-481f-a8ed-a0c309e63253" containerName="mariadb-account-create" Nov 22 08:36:31 crc kubenswrapper[4856]: E1122 08:36:31.896661 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9091157f-35e8-471a-a784-9dad836695ab" containerName="mariadb-database-create" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.896668 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9091157f-35e8-471a-a784-9dad836695ab" containerName="mariadb-database-create" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.896852 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5562803-a85b-481f-a8ed-a0c309e63253" containerName="mariadb-account-create" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.896874 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9091157f-35e8-471a-a784-9dad836695ab" containerName="mariadb-database-create" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.897421 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.903369 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.903380 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-57fs8" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.909150 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-f865q"] Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.997916 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-db-sync-config-data\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.998075 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjh6k\" (UniqueName: \"kubernetes.io/projected/b945e4ce-9238-4801-a040-0fc22d868de7-kube-api-access-rjh6k\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:31 crc kubenswrapper[4856]: I1122 08:36:31.998136 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-combined-ca-bundle\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.100315 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjh6k\" (UniqueName: 
\"kubernetes.io/projected/b945e4ce-9238-4801-a040-0fc22d868de7-kube-api-access-rjh6k\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.100426 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-combined-ca-bundle\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.100539 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-db-sync-config-data\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.109302 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-db-sync-config-data\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.118013 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjh6k\" (UniqueName: \"kubernetes.io/projected/b945e4ce-9238-4801-a040-0fc22d868de7-kube-api-access-rjh6k\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.124669 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-combined-ca-bundle\") pod \"barbican-db-sync-f865q\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.220821 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.662615 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-f865q"] Nov 22 08:36:32 crc kubenswrapper[4856]: I1122 08:36:32.863435 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-f865q" event={"ID":"b945e4ce-9238-4801-a040-0fc22d868de7","Type":"ContainerStarted","Data":"7ba3d94844718687a4337ac9a7ebe507c3e89f9fdfa078c152ed5508df336ff7"} Nov 22 08:36:37 crc kubenswrapper[4856]: I1122 08:36:37.903359 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-f865q" event={"ID":"b945e4ce-9238-4801-a040-0fc22d868de7","Type":"ContainerStarted","Data":"1b5a366c26955a35d4496d1eb77522d7e86641bcf36a9510f81e1ce989dc803a"} Nov 22 08:36:37 crc kubenswrapper[4856]: I1122 08:36:37.919214 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-f865q" podStartSLOduration=2.262324939 podStartE2EDuration="6.9191959s" podCreationTimestamp="2025-11-22 08:36:31 +0000 UTC" firstStartedPulling="2025-11-22 08:36:32.68043083 +0000 UTC m=+5635.093824088" lastFinishedPulling="2025-11-22 08:36:37.337301791 +0000 UTC m=+5639.750695049" observedRunningTime="2025-11-22 08:36:37.918866372 +0000 UTC m=+5640.332259630" watchObservedRunningTime="2025-11-22 08:36:37.9191959 +0000 UTC m=+5640.332589158" Nov 22 08:36:38 crc kubenswrapper[4856]: I1122 08:36:38.917261 4856 generic.go:334] "Generic (PLEG): container finished" podID="b945e4ce-9238-4801-a040-0fc22d868de7" containerID="1b5a366c26955a35d4496d1eb77522d7e86641bcf36a9510f81e1ce989dc803a" exitCode=0 Nov 22 08:36:38 crc kubenswrapper[4856]: I1122 08:36:38.917305 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-f865q" event={"ID":"b945e4ce-9238-4801-a040-0fc22d868de7","Type":"ContainerDied","Data":"1b5a366c26955a35d4496d1eb77522d7e86641bcf36a9510f81e1ce989dc803a"} Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.271005 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.349721 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjh6k\" (UniqueName: \"kubernetes.io/projected/b945e4ce-9238-4801-a040-0fc22d868de7-kube-api-access-rjh6k\") pod \"b945e4ce-9238-4801-a040-0fc22d868de7\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.349886 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-db-sync-config-data\") pod \"b945e4ce-9238-4801-a040-0fc22d868de7\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.350006 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-combined-ca-bundle\") pod \"b945e4ce-9238-4801-a040-0fc22d868de7\" (UID: \"b945e4ce-9238-4801-a040-0fc22d868de7\") " Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.357852 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b945e4ce-9238-4801-a040-0fc22d868de7" (UID: "b945e4ce-9238-4801-a040-0fc22d868de7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.359176 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b945e4ce-9238-4801-a040-0fc22d868de7-kube-api-access-rjh6k" (OuterVolumeSpecName: "kube-api-access-rjh6k") pod "b945e4ce-9238-4801-a040-0fc22d868de7" (UID: "b945e4ce-9238-4801-a040-0fc22d868de7"). InnerVolumeSpecName "kube-api-access-rjh6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.384166 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b945e4ce-9238-4801-a040-0fc22d868de7" (UID: "b945e4ce-9238-4801-a040-0fc22d868de7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.451882 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.451940 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjh6k\" (UniqueName: \"kubernetes.io/projected/b945e4ce-9238-4801-a040-0fc22d868de7-kube-api-access-rjh6k\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.451951 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b945e4ce-9238-4801-a040-0fc22d868de7-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.935176 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-f865q" event={"ID":"b945e4ce-9238-4801-a040-0fc22d868de7","Type":"ContainerDied","Data":"7ba3d94844718687a4337ac9a7ebe507c3e89f9fdfa078c152ed5508df336ff7"} Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.935222 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba3d94844718687a4337ac9a7ebe507c3e89f9fdfa078c152ed5508df336ff7" Nov 22 08:36:40 crc kubenswrapper[4856]: I1122 08:36:40.935260 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-f865q" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.145424 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-955d7597c-vxs4h"] Nov 22 08:36:41 crc kubenswrapper[4856]: E1122 08:36:41.146314 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b945e4ce-9238-4801-a040-0fc22d868de7" containerName="barbican-db-sync" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.146328 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b945e4ce-9238-4801-a040-0fc22d868de7" containerName="barbican-db-sync" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.146494 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b945e4ce-9238-4801-a040-0fc22d868de7" containerName="barbican-db-sync" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.147409 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.151034 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.151487 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-57fs8" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.151788 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.172797 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-955d7597c-vxs4h"] Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.203604 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-689d9fcc78-qzcr4"] Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.205642 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.208234 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.218048 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-689d9fcc78-qzcr4"] Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.273418 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad1422-5ab8-4d58-8f88-730c9e301ae9-logs\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.273555 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-762q9\" (UniqueName: \"kubernetes.io/projected/8cad1422-5ab8-4d58-8f88-730c9e301ae9-kube-api-access-762q9\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.273658 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-config-data-custom\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.273821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-config-data\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.273899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-combined-ca-bundle\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.302585 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d9c44b575-dqqvn"] Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.317058 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.334899 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d9c44b575-dqqvn"] Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376404 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-config-data-custom\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376467 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-combined-ca-bundle\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376587 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-config-data\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376650 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad1422-5ab8-4d58-8f88-730c9e301ae9-logs\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376721 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-762q9\" (UniqueName: \"kubernetes.io/projected/8cad1422-5ab8-4d58-8f88-730c9e301ae9-kube-api-access-762q9\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376772 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-config-data-custom\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376826 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e24c6d-a8b8-44a4-8654-8e8623dc844f-logs\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376856 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-config-data\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc 
kubenswrapper[4856]: I1122 08:36:41.376885 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xhk4\" (UniqueName: \"kubernetes.io/projected/51e24c6d-a8b8-44a4-8654-8e8623dc844f-kube-api-access-4xhk4\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.376914 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-combined-ca-bundle\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.377144 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cad1422-5ab8-4d58-8f88-730c9e301ae9-logs\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.381540 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-config-data\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.387749 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-config-data-custom\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.396391 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cad1422-5ab8-4d58-8f88-730c9e301ae9-combined-ca-bundle\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.399232 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-762q9\" (UniqueName: \"kubernetes.io/projected/8cad1422-5ab8-4d58-8f88-730c9e301ae9-kube-api-access-762q9\") pod \"barbican-worker-955d7597c-vxs4h\" (UID: \"8cad1422-5ab8-4d58-8f88-730c9e301ae9\") " pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.438904 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cc475fc98-fv7bh"] Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.440944 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.455008 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cc475fc98-fv7bh"] Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.455167 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.470417 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-955d7597c-vxs4h" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478050 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbb6t\" (UniqueName: \"kubernetes.io/projected/95999a93-29e1-455b-840f-06b9e8e5cacc-kube-api-access-pbb6t\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478109 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-config\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478156 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e24c6d-a8b8-44a4-8654-8e8623dc844f-logs\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xhk4\" (UniqueName: \"kubernetes.io/projected/51e24c6d-a8b8-44a4-8654-8e8623dc844f-kube-api-access-4xhk4\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478203 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-dns-svc\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478243 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-sb\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478282 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-nb\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478315 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-config-data-custom\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478342 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-combined-ca-bundle\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.478381 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-config-data\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.481202 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51e24c6d-a8b8-44a4-8654-8e8623dc844f-logs\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.483746 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-config-data\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.485803 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-combined-ca-bundle\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.487197 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51e24c6d-a8b8-44a4-8654-8e8623dc844f-config-data-custom\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.506173 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xhk4\" (UniqueName: \"kubernetes.io/projected/51e24c6d-a8b8-44a4-8654-8e8623dc844f-kube-api-access-4xhk4\") pod \"barbican-keystone-listener-689d9fcc78-qzcr4\" (UID: \"51e24c6d-a8b8-44a4-8654-8e8623dc844f\") " pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.528337 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.580921 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-sb\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.580973 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-nb\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581032 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df44256-9812-4f63-beb9-a3cb6f22ed0d-logs\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581059 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-combined-ca-bundle\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581088 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbb6t\" (UniqueName: \"kubernetes.io/projected/95999a93-29e1-455b-840f-06b9e8e5cacc-kube-api-access-pbb6t\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581107 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data-custom\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581133 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581165 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-config\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581191 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsz26\" (UniqueName: \"kubernetes.io/projected/8df44256-9812-4f63-beb9-a3cb6f22ed0d-kube-api-access-tsz26\") pod 
\"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.581242 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-dns-svc\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.584052 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-nb\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.584073 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-dns-svc\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.584088 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-sb\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.584899 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-config\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.605653 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbb6t\" (UniqueName: \"kubernetes.io/projected/95999a93-29e1-455b-840f-06b9e8e5cacc-kube-api-access-pbb6t\") pod \"dnsmasq-dns-d9c44b575-dqqvn\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.642975 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.693550 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df44256-9812-4f63-beb9-a3cb6f22ed0d-logs\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.693622 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-combined-ca-bundle\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.693666 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data-custom\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.693697 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.693758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsz26\" (UniqueName: \"kubernetes.io/projected/8df44256-9812-4f63-beb9-a3cb6f22ed0d-kube-api-access-tsz26\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.694254 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df44256-9812-4f63-beb9-a3cb6f22ed0d-logs\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.700459 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-combined-ca-bundle\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.702758 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data-custom\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.703782 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc 
kubenswrapper[4856]: I1122 08:36:41.720040 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsz26\" (UniqueName: \"kubernetes.io/projected/8df44256-9812-4f63-beb9-a3cb6f22ed0d-kube-api-access-tsz26\") pod \"barbican-api-6cc475fc98-fv7bh\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.781420 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:41 crc kubenswrapper[4856]: I1122 08:36:41.991165 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-955d7597c-vxs4h"] Nov 22 08:36:41 crc kubenswrapper[4856]: W1122 08:36:41.993353 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cad1422_5ab8_4d58_8f88_730c9e301ae9.slice/crio-f5198f7c99fee6a3d86a31c5aef9b81f3ad2b1d78195be7ac59137c3b3341698 WatchSource:0}: Error finding container f5198f7c99fee6a3d86a31c5aef9b81f3ad2b1d78195be7ac59137c3b3341698: Status 404 returned error can't find the container with id f5198f7c99fee6a3d86a31c5aef9b81f3ad2b1d78195be7ac59137c3b3341698 Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.107248 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-689d9fcc78-qzcr4"] Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.217114 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d9c44b575-dqqvn"] Nov 22 08:36:42 crc kubenswrapper[4856]: W1122 08:36:42.228025 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95999a93_29e1_455b_840f_06b9e8e5cacc.slice/crio-8f7f955bdd6d28dfd51d002811316070b5b5a2292255e7b13a89461452c5969e WatchSource:0}: Error finding container 8f7f955bdd6d28dfd51d002811316070b5b5a2292255e7b13a89461452c5969e: Status 404 returned error can't find the container with id 8f7f955bdd6d28dfd51d002811316070b5b5a2292255e7b13a89461452c5969e Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.293607 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cc475fc98-fv7bh"] Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.958870 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-955d7597c-vxs4h" event={"ID":"8cad1422-5ab8-4d58-8f88-730c9e301ae9","Type":"ContainerStarted","Data":"f5198f7c99fee6a3d86a31c5aef9b81f3ad2b1d78195be7ac59137c3b3341698"} Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.960845 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" event={"ID":"51e24c6d-a8b8-44a4-8654-8e8623dc844f","Type":"ContainerStarted","Data":"7c583d272e615ec6a6a7e9b040e3e4fef121a440c1a72c6df967a434532a53f0"} Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.964983 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc475fc98-fv7bh" event={"ID":"8df44256-9812-4f63-beb9-a3cb6f22ed0d","Type":"ContainerStarted","Data":"b258c69e90d9e93a7c272855af48995895fb079a521b523d0337e29c65f15043"} Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.965048 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc475fc98-fv7bh" 
event={"ID":"8df44256-9812-4f63-beb9-a3cb6f22ed0d","Type":"ContainerStarted","Data":"6bb49533632c0839c3bb455d0c882244204ce9631bfadf32434734727e0a6c9a"} Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.967444 4856 generic.go:334] "Generic (PLEG): container finished" podID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerID="041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3" exitCode=0 Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.967497 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" event={"ID":"95999a93-29e1-455b-840f-06b9e8e5cacc","Type":"ContainerDied","Data":"041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3"} Nov 22 08:36:42 crc kubenswrapper[4856]: I1122 08:36:42.967554 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" event={"ID":"95999a93-29e1-455b-840f-06b9e8e5cacc","Type":"ContainerStarted","Data":"8f7f955bdd6d28dfd51d002811316070b5b5a2292255e7b13a89461452c5969e"} Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.304664 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7ff8c9cc54-8k24x"] Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.306253 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.308733 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.308961 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.321087 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ff8c9cc54-8k24x"] Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.432091 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-config-data-custom\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.432170 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-internal-tls-certs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.432215 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2rhn\" (UniqueName: \"kubernetes.io/projected/fcb86f9c-fee1-46d6-acac-20f49f472dfa-kube-api-access-l2rhn\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.432253 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcb86f9c-fee1-46d6-acac-20f49f472dfa-logs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc 
kubenswrapper[4856]: I1122 08:36:43.432294 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-config-data\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.432449 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-combined-ca-bundle\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.432475 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-public-tls-certs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.534665 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-combined-ca-bundle\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.535708 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-public-tls-certs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.535766 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-config-data-custom\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.535957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-internal-tls-certs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.535999 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2rhn\" (UniqueName: \"kubernetes.io/projected/fcb86f9c-fee1-46d6-acac-20f49f472dfa-kube-api-access-l2rhn\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.536031 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcb86f9c-fee1-46d6-acac-20f49f472dfa-logs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " 
pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.536063 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-config-data\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.537115 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fcb86f9c-fee1-46d6-acac-20f49f472dfa-logs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.545781 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-config-data-custom\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.548119 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-config-data\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.550062 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-internal-tls-certs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.550825 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-public-tls-certs\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.560210 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcb86f9c-fee1-46d6-acac-20f49f472dfa-combined-ca-bundle\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.560210 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2rhn\" (UniqueName: \"kubernetes.io/projected/fcb86f9c-fee1-46d6-acac-20f49f472dfa-kube-api-access-l2rhn\") pod \"barbican-api-7ff8c9cc54-8k24x\" (UID: \"fcb86f9c-fee1-46d6-acac-20f49f472dfa\") " pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.628113 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.979414 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc475fc98-fv7bh" event={"ID":"8df44256-9812-4f63-beb9-a3cb6f22ed0d","Type":"ContainerStarted","Data":"297ba6f7460b2713f75d68d8ac0d37b8ca472cecf3a97a53ad9d50e4d0d3a147"} Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.979728 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:43 crc kubenswrapper[4856]: I1122 08:36:43.979789 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.012249 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cc475fc98-fv7bh" podStartSLOduration=3.012210624 podStartE2EDuration="3.012210624s" podCreationTimestamp="2025-11-22 08:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:36:44.004309651 +0000 UTC m=+5646.417702929" watchObservedRunningTime="2025-11-22 08:36:44.012210624 +0000 UTC m=+5646.425603872" Nov 22 08:36:44 crc kubenswrapper[4856]: W1122 08:36:44.417033 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcb86f9c_fee1_46d6_acac_20f49f472dfa.slice/crio-cbcc557208975d8ac9d987d167cc95c055c6f9939fb0fb1305d59a2d3657c6bd WatchSource:0}: Error finding container cbcc557208975d8ac9d987d167cc95c055c6f9939fb0fb1305d59a2d3657c6bd: Status 404 returned error can't find the container with id cbcc557208975d8ac9d987d167cc95c055c6f9939fb0fb1305d59a2d3657c6bd Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.417153 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ff8c9cc54-8k24x"] Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.990955 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff8c9cc54-8k24x" event={"ID":"fcb86f9c-fee1-46d6-acac-20f49f472dfa","Type":"ContainerStarted","Data":"3c7c2266e8cfd619059b2dd4c076cbe46e6bd704f7fba4265b504d2514a7d50a"} Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.991612 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff8c9cc54-8k24x" event={"ID":"fcb86f9c-fee1-46d6-acac-20f49f472dfa","Type":"ContainerStarted","Data":"0858e071f1e047e49224dc52e0e7ce766807473453331f01354868a76a91c9cf"} Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.991668 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.991685 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.991697 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff8c9cc54-8k24x" event={"ID":"fcb86f9c-fee1-46d6-acac-20f49f472dfa","Type":"ContainerStarted","Data":"cbcc557208975d8ac9d987d167cc95c055c6f9939fb0fb1305d59a2d3657c6bd"} Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.992697 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" 
event={"ID":"51e24c6d-a8b8-44a4-8654-8e8623dc844f","Type":"ContainerStarted","Data":"9738ca583f09775ca228dc6dc921e2758bda3d015507b07eeee890ba948eab48"} Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.992792 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" event={"ID":"51e24c6d-a8b8-44a4-8654-8e8623dc844f","Type":"ContainerStarted","Data":"fad0ab83f2bd49270f2b82fd2fbe1e9035a385b4d0358010e5fe8b80fb4df589"} Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.994689 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" event={"ID":"95999a93-29e1-455b-840f-06b9e8e5cacc","Type":"ContainerStarted","Data":"92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088"} Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.994827 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.996459 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-955d7597c-vxs4h" event={"ID":"8cad1422-5ab8-4d58-8f88-730c9e301ae9","Type":"ContainerStarted","Data":"f6ef935e744af8874997aa683bfb022265b39d8d778bb6f4d6e5c4a02b3b881e"} Nov 22 08:36:44 crc kubenswrapper[4856]: I1122 08:36:44.996502 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-955d7597c-vxs4h" event={"ID":"8cad1422-5ab8-4d58-8f88-730c9e301ae9","Type":"ContainerStarted","Data":"76da12057248ac8a34c7edc18cb310647b6c4e519fa37d91e2157e5343c9ab8c"} Nov 22 08:36:45 crc kubenswrapper[4856]: I1122 08:36:45.037435 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7ff8c9cc54-8k24x" podStartSLOduration=2.037403185 podStartE2EDuration="2.037403185s" podCreationTimestamp="2025-11-22 08:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:36:45.014101928 +0000 UTC m=+5647.427495206" watchObservedRunningTime="2025-11-22 08:36:45.037403185 +0000 UTC m=+5647.450796453" Nov 22 08:36:45 crc kubenswrapper[4856]: I1122 08:36:45.039625 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-955d7597c-vxs4h" podStartSLOduration=2.073427148 podStartE2EDuration="4.039616085s" podCreationTimestamp="2025-11-22 08:36:41 +0000 UTC" firstStartedPulling="2025-11-22 08:36:41.995108367 +0000 UTC m=+5644.408501625" lastFinishedPulling="2025-11-22 08:36:43.961297304 +0000 UTC m=+5646.374690562" observedRunningTime="2025-11-22 08:36:45.031754963 +0000 UTC m=+5647.445148221" watchObservedRunningTime="2025-11-22 08:36:45.039616085 +0000 UTC m=+5647.453009353" Nov 22 08:36:45 crc kubenswrapper[4856]: I1122 08:36:45.059989 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-689d9fcc78-qzcr4" podStartSLOduration=2.210326879 podStartE2EDuration="4.059972302s" podCreationTimestamp="2025-11-22 08:36:41 +0000 UTC" firstStartedPulling="2025-11-22 08:36:42.117724985 +0000 UTC m=+5644.531118243" lastFinishedPulling="2025-11-22 08:36:43.967370408 +0000 UTC m=+5646.380763666" observedRunningTime="2025-11-22 08:36:45.05059372 +0000 UTC m=+5647.463986978" watchObservedRunningTime="2025-11-22 08:36:45.059972302 +0000 UTC m=+5647.473365560" Nov 22 08:36:45 crc kubenswrapper[4856]: I1122 08:36:45.083472 4856 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" podStartSLOduration=4.083452854 podStartE2EDuration="4.083452854s" podCreationTimestamp="2025-11-22 08:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:36:45.075228762 +0000 UTC m=+5647.488622020" watchObservedRunningTime="2025-11-22 08:36:45.083452854 +0000 UTC m=+5647.496846112" Nov 22 08:36:51 crc kubenswrapper[4856]: I1122 08:36:51.644651 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:36:51 crc kubenswrapper[4856]: I1122 08:36:51.719898 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79c8568849-6rbtr"] Nov 22 08:36:51 crc kubenswrapper[4856]: I1122 08:36:51.720109 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" podUID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerName="dnsmasq-dns" containerID="cri-o://b25ec2d6e3b8825e022b00e3b389fd83262b4bcefd184959c9285b0f3bd9b54a" gracePeriod=10 Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.066258 4856 generic.go:334] "Generic (PLEG): container finished" podID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerID="b25ec2d6e3b8825e022b00e3b389fd83262b4bcefd184959c9285b0f3bd9b54a" exitCode=0 Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.066711 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" event={"ID":"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6","Type":"ContainerDied","Data":"b25ec2d6e3b8825e022b00e3b389fd83262b4bcefd184959c9285b0f3bd9b54a"} Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.257754 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.427291 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6c48\" (UniqueName: \"kubernetes.io/projected/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-kube-api-access-g6c48\") pod \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.427718 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-nb\") pod \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.427869 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-dns-svc\") pod \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.427980 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-config\") pod \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.428070 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-sb\") pod \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\" (UID: \"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6\") " Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.442762 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-kube-api-access-g6c48" (OuterVolumeSpecName: "kube-api-access-g6c48") pod "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" (UID: "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6"). InnerVolumeSpecName "kube-api-access-g6c48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.479577 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" (UID: "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.481731 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" (UID: "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.484248 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-config" (OuterVolumeSpecName: "config") pod "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" (UID: "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.487317 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" (UID: "e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.530758 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6c48\" (UniqueName: \"kubernetes.io/projected/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-kube-api-access-g6c48\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.530792 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.530802 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.530812 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:52 crc kubenswrapper[4856]: I1122 08:36:52.530822 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.080792 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" event={"ID":"e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6","Type":"ContainerDied","Data":"6f14454caa3adeba823cd39d4414f03cf41fd034b959db025373a87e7cba3755"} Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.081399 4856 scope.go:117] "RemoveContainer" containerID="b25ec2d6e3b8825e022b00e3b389fd83262b4bcefd184959c9285b0f3bd9b54a" Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.081156 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79c8568849-6rbtr" Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.119359 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79c8568849-6rbtr"] Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.121822 4856 scope.go:117] "RemoveContainer" containerID="89521544dfcf8018eefe180857c8a0091f1e6e0da9a68522243837de72213cac" Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.128361 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79c8568849-6rbtr"] Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.187181 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:53 crc kubenswrapper[4856]: I1122 08:36:53.375904 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:54 crc kubenswrapper[4856]: I1122 08:36:54.722691 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" path="/var/lib/kubelet/pods/e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6/volumes" Nov 22 08:36:54 crc kubenswrapper[4856]: I1122 08:36:54.981207 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:55 crc kubenswrapper[4856]: I1122 08:36:55.129959 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ff8c9cc54-8k24x" Nov 22 08:36:55 crc kubenswrapper[4856]: I1122 08:36:55.224603 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cc475fc98-fv7bh"] Nov 22 08:36:55 crc kubenswrapper[4856]: I1122 08:36:55.224836 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cc475fc98-fv7bh" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api-log" containerID="cri-o://b258c69e90d9e93a7c272855af48995895fb079a521b523d0337e29c65f15043" gracePeriod=30 Nov 22 08:36:55 crc kubenswrapper[4856]: I1122 08:36:55.224954 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cc475fc98-fv7bh" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api" containerID="cri-o://297ba6f7460b2713f75d68d8ac0d37b8ca472cecf3a97a53ad9d50e4d0d3a147" gracePeriod=30 Nov 22 08:36:55 crc kubenswrapper[4856]: I1122 08:36:55.230353 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6cc475fc98-fv7bh" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.39:9311/healthcheck\": EOF" Nov 22 08:36:56 crc kubenswrapper[4856]: I1122 08:36:56.112031 4856 generic.go:334] "Generic (PLEG): container finished" podID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerID="b258c69e90d9e93a7c272855af48995895fb079a521b523d0337e29c65f15043" exitCode=143 Nov 22 08:36:56 crc kubenswrapper[4856]: I1122 08:36:56.112139 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc475fc98-fv7bh" event={"ID":"8df44256-9812-4f63-beb9-a3cb6f22ed0d","Type":"ContainerDied","Data":"b258c69e90d9e93a7c272855af48995895fb079a521b523d0337e29c65f15043"} Nov 22 08:36:58 crc kubenswrapper[4856]: I1122 08:36:58.365821 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cc475fc98-fv7bh" 
podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.1.39:9311/healthcheck\": read tcp 10.217.0.2:36904->10.217.1.39:9311: read: connection reset by peer" Nov 22 08:36:58 crc kubenswrapper[4856]: I1122 08:36:58.365946 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6cc475fc98-fv7bh" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.1.39:9311/healthcheck\": read tcp 10.217.0.2:36918->10.217.1.39:9311: read: connection reset by peer" Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.144787 4856 generic.go:334] "Generic (PLEG): container finished" podID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerID="297ba6f7460b2713f75d68d8ac0d37b8ca472cecf3a97a53ad9d50e4d0d3a147" exitCode=0 Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.144872 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc475fc98-fv7bh" event={"ID":"8df44256-9812-4f63-beb9-a3cb6f22ed0d","Type":"ContainerDied","Data":"297ba6f7460b2713f75d68d8ac0d37b8ca472cecf3a97a53ad9d50e4d0d3a147"} Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.883843 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.983862 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsz26\" (UniqueName: \"kubernetes.io/projected/8df44256-9812-4f63-beb9-a3cb6f22ed0d-kube-api-access-tsz26\") pod \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.983918 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data\") pod \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.983955 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data-custom\") pod \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.983990 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-combined-ca-bundle\") pod \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.984037 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df44256-9812-4f63-beb9-a3cb6f22ed0d-logs\") pod \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\" (UID: \"8df44256-9812-4f63-beb9-a3cb6f22ed0d\") " Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.984574 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df44256-9812-4f63-beb9-a3cb6f22ed0d-logs" (OuterVolumeSpecName: "logs") pod "8df44256-9812-4f63-beb9-a3cb6f22ed0d" (UID: "8df44256-9812-4f63-beb9-a3cb6f22ed0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.989718 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8df44256-9812-4f63-beb9-a3cb6f22ed0d" (UID: "8df44256-9812-4f63-beb9-a3cb6f22ed0d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:36:59 crc kubenswrapper[4856]: I1122 08:36:59.989882 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df44256-9812-4f63-beb9-a3cb6f22ed0d-kube-api-access-tsz26" (OuterVolumeSpecName: "kube-api-access-tsz26") pod "8df44256-9812-4f63-beb9-a3cb6f22ed0d" (UID: "8df44256-9812-4f63-beb9-a3cb6f22ed0d"). InnerVolumeSpecName "kube-api-access-tsz26". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.017360 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8df44256-9812-4f63-beb9-a3cb6f22ed0d" (UID: "8df44256-9812-4f63-beb9-a3cb6f22ed0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.027190 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data" (OuterVolumeSpecName: "config-data") pod "8df44256-9812-4f63-beb9-a3cb6f22ed0d" (UID: "8df44256-9812-4f63-beb9-a3cb6f22ed0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.086441 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsz26\" (UniqueName: \"kubernetes.io/projected/8df44256-9812-4f63-beb9-a3cb6f22ed0d-kube-api-access-tsz26\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.086491 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.086533 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.086552 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df44256-9812-4f63-beb9-a3cb6f22ed0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.086572 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8df44256-9812-4f63-beb9-a3cb6f22ed0d-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.156440 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cc475fc98-fv7bh" event={"ID":"8df44256-9812-4f63-beb9-a3cb6f22ed0d","Type":"ContainerDied","Data":"6bb49533632c0839c3bb455d0c882244204ce9631bfadf32434734727e0a6c9a"} Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.156531 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cc475fc98-fv7bh" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.156600 4856 scope.go:117] "RemoveContainer" containerID="297ba6f7460b2713f75d68d8ac0d37b8ca472cecf3a97a53ad9d50e4d0d3a147" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.196361 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cc475fc98-fv7bh"] Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.200767 4856 scope.go:117] "RemoveContainer" containerID="b258c69e90d9e93a7c272855af48995895fb079a521b523d0337e29c65f15043" Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.205462 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6cc475fc98-fv7bh"] Nov 22 08:37:00 crc kubenswrapper[4856]: I1122 08:37:00.721991 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" path="/var/lib/kubelet/pods/8df44256-9812-4f63-beb9-a3cb6f22ed0d/volumes" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.689143 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-8qjfd"] Nov 22 08:37:18 crc kubenswrapper[4856]: E1122 08:37:18.690163 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerName="init" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.690184 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerName="init" Nov 22 08:37:18 crc kubenswrapper[4856]: E1122 08:37:18.690196 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api-log" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.690203 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api-log" Nov 22 08:37:18 crc kubenswrapper[4856]: E1122 08:37:18.690227 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.690236 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api" Nov 22 08:37:18 crc kubenswrapper[4856]: E1122 08:37:18.690272 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerName="dnsmasq-dns" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.690281 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerName="dnsmasq-dns" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.690467 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.690485 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df44256-9812-4f63-beb9-a3cb6f22ed0d" containerName="barbican-api-log" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.690497 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94128d4-0bfe-4b8c-9d8e-f404bf3beeb6" containerName="dnsmasq-dns" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.691411 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.701181 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8qjfd"] Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.784184 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-acc9-account-create-tbhhz"] Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.785265 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.790965 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.795294 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-acc9-account-create-tbhhz"] Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.859626 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjg8\" (UniqueName: \"kubernetes.io/projected/17d00c65-c366-406b-b9be-1d9c80574db0-kube-api-access-5xjg8\") pod \"neutron-db-create-8qjfd\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.860700 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d00c65-c366-406b-b9be-1d9c80574db0-operator-scripts\") pod \"neutron-db-create-8qjfd\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.962536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d00c65-c366-406b-b9be-1d9c80574db0-operator-scripts\") pod \"neutron-db-create-8qjfd\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.962589 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-operator-scripts\") pod \"neutron-acc9-account-create-tbhhz\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.962643 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rfr\" (UniqueName: \"kubernetes.io/projected/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-kube-api-access-n6rfr\") pod \"neutron-acc9-account-create-tbhhz\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.962705 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xjg8\" (UniqueName: \"kubernetes.io/projected/17d00c65-c366-406b-b9be-1d9c80574db0-kube-api-access-5xjg8\") pod \"neutron-db-create-8qjfd\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.963863 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/17d00c65-c366-406b-b9be-1d9c80574db0-operator-scripts\") pod \"neutron-db-create-8qjfd\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:18 crc kubenswrapper[4856]: I1122 08:37:18.984183 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xjg8\" (UniqueName: \"kubernetes.io/projected/17d00c65-c366-406b-b9be-1d9c80574db0-kube-api-access-5xjg8\") pod \"neutron-db-create-8qjfd\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.010035 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.064743 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-operator-scripts\") pod \"neutron-acc9-account-create-tbhhz\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.064824 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rfr\" (UniqueName: \"kubernetes.io/projected/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-kube-api-access-n6rfr\") pod \"neutron-acc9-account-create-tbhhz\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.065915 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-operator-scripts\") pod \"neutron-acc9-account-create-tbhhz\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.082752 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rfr\" (UniqueName: \"kubernetes.io/projected/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-kube-api-access-n6rfr\") pod \"neutron-acc9-account-create-tbhhz\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.102548 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.456684 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-8qjfd"] Nov 22 08:37:19 crc kubenswrapper[4856]: I1122 08:37:19.605612 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-acc9-account-create-tbhhz"] Nov 22 08:37:19 crc kubenswrapper[4856]: W1122 08:37:19.610714 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39bd45e1_dd34_4aaa_b0f7_e939fdae1d40.slice/crio-918a7f4be51b229eb14307821670ae13f07fe774b16ec622cca1e143de2268d1 WatchSource:0}: Error finding container 918a7f4be51b229eb14307821670ae13f07fe774b16ec622cca1e143de2268d1: Status 404 returned error can't find the container with id 918a7f4be51b229eb14307821670ae13f07fe774b16ec622cca1e143de2268d1 Nov 22 08:37:20 crc kubenswrapper[4856]: I1122 08:37:20.380664 4856 generic.go:334] "Generic (PLEG): container finished" podID="17d00c65-c366-406b-b9be-1d9c80574db0" containerID="39ef4ef30c09002a4bb23bf2e7c579cc2c554c3d7baea7033164eedda904462d" exitCode=0 Nov 22 08:37:20 crc kubenswrapper[4856]: I1122 08:37:20.381032 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8qjfd" event={"ID":"17d00c65-c366-406b-b9be-1d9c80574db0","Type":"ContainerDied","Data":"39ef4ef30c09002a4bb23bf2e7c579cc2c554c3d7baea7033164eedda904462d"} Nov 22 08:37:20 crc kubenswrapper[4856]: I1122 08:37:20.381158 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8qjfd" event={"ID":"17d00c65-c366-406b-b9be-1d9c80574db0","Type":"ContainerStarted","Data":"7822fb3f33604f20c8ca69fa5beb8d148c93096cac6c27467ea6e53f7b963c43"} Nov 22 08:37:20 crc kubenswrapper[4856]: I1122 08:37:20.383039 4856 generic.go:334] "Generic (PLEG): container finished" podID="39bd45e1-dd34-4aaa-b0f7-e939fdae1d40" containerID="d7644380a62eff558c0fa35ec29ae901b1559292572f2737f5aaba538794a13c" exitCode=0 Nov 22 08:37:20 crc kubenswrapper[4856]: I1122 08:37:20.383084 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-acc9-account-create-tbhhz" event={"ID":"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40","Type":"ContainerDied","Data":"d7644380a62eff558c0fa35ec29ae901b1559292572f2737f5aaba538794a13c"} Nov 22 08:37:20 crc kubenswrapper[4856]: I1122 08:37:20.383108 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-acc9-account-create-tbhhz" event={"ID":"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40","Type":"ContainerStarted","Data":"918a7f4be51b229eb14307821670ae13f07fe774b16ec622cca1e143de2268d1"} Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.775871 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.783201 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.923617 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6rfr\" (UniqueName: \"kubernetes.io/projected/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-kube-api-access-n6rfr\") pod \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.923720 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xjg8\" (UniqueName: \"kubernetes.io/projected/17d00c65-c366-406b-b9be-1d9c80574db0-kube-api-access-5xjg8\") pod \"17d00c65-c366-406b-b9be-1d9c80574db0\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.923857 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d00c65-c366-406b-b9be-1d9c80574db0-operator-scripts\") pod \"17d00c65-c366-406b-b9be-1d9c80574db0\" (UID: \"17d00c65-c366-406b-b9be-1d9c80574db0\") " Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.923946 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-operator-scripts\") pod \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\" (UID: \"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40\") " Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.925173 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "39bd45e1-dd34-4aaa-b0f7-e939fdae1d40" (UID: "39bd45e1-dd34-4aaa-b0f7-e939fdae1d40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.925459 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17d00c65-c366-406b-b9be-1d9c80574db0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17d00c65-c366-406b-b9be-1d9c80574db0" (UID: "17d00c65-c366-406b-b9be-1d9c80574db0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.930692 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-kube-api-access-n6rfr" (OuterVolumeSpecName: "kube-api-access-n6rfr") pod "39bd45e1-dd34-4aaa-b0f7-e939fdae1d40" (UID: "39bd45e1-dd34-4aaa-b0f7-e939fdae1d40"). InnerVolumeSpecName "kube-api-access-n6rfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:37:21 crc kubenswrapper[4856]: I1122 08:37:21.931002 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d00c65-c366-406b-b9be-1d9c80574db0-kube-api-access-5xjg8" (OuterVolumeSpecName: "kube-api-access-5xjg8") pod "17d00c65-c366-406b-b9be-1d9c80574db0" (UID: "17d00c65-c366-406b-b9be-1d9c80574db0"). InnerVolumeSpecName "kube-api-access-5xjg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.025986 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.026277 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6rfr\" (UniqueName: \"kubernetes.io/projected/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40-kube-api-access-n6rfr\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.026341 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xjg8\" (UniqueName: \"kubernetes.io/projected/17d00c65-c366-406b-b9be-1d9c80574db0-kube-api-access-5xjg8\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.026402 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17d00c65-c366-406b-b9be-1d9c80574db0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.322924 4856 scope.go:117] "RemoveContainer" containerID="7d2b3cc2dc1eefa3214289e070a63f247b95dbc1e4dea539c656533fa73b7d2f" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.343896 4856 scope.go:117] "RemoveContainer" containerID="f7d1dde0bf4691b59a12fc80afb0031aff82c145d4416a7290e3a53e41f853ad" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.366496 4856 scope.go:117] "RemoveContainer" containerID="b87e517ef9745f76584bd324723f87ae3fa0efd59c92043300fc4ec71bb70f8b" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.405998 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-8qjfd" event={"ID":"17d00c65-c366-406b-b9be-1d9c80574db0","Type":"ContainerDied","Data":"7822fb3f33604f20c8ca69fa5beb8d148c93096cac6c27467ea6e53f7b963c43"} Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.406042 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7822fb3f33604f20c8ca69fa5beb8d148c93096cac6c27467ea6e53f7b963c43" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.406290 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-8qjfd" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.407358 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-acc9-account-create-tbhhz" event={"ID":"39bd45e1-dd34-4aaa-b0f7-e939fdae1d40","Type":"ContainerDied","Data":"918a7f4be51b229eb14307821670ae13f07fe774b16ec622cca1e143de2268d1"} Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.407379 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="918a7f4be51b229eb14307821670ae13f07fe774b16ec622cca1e143de2268d1" Nov 22 08:37:22 crc kubenswrapper[4856]: I1122 08:37:22.407422 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-acc9-account-create-tbhhz" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.026980 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-qxjbg"] Nov 22 08:37:24 crc kubenswrapper[4856]: E1122 08:37:24.028014 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bd45e1-dd34-4aaa-b0f7-e939fdae1d40" containerName="mariadb-account-create" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.028038 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bd45e1-dd34-4aaa-b0f7-e939fdae1d40" containerName="mariadb-account-create" Nov 22 08:37:24 crc kubenswrapper[4856]: E1122 08:37:24.028083 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d00c65-c366-406b-b9be-1d9c80574db0" containerName="mariadb-database-create" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.028095 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d00c65-c366-406b-b9be-1d9c80574db0" containerName="mariadb-database-create" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.028358 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d00c65-c366-406b-b9be-1d9c80574db0" containerName="mariadb-database-create" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.028397 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bd45e1-dd34-4aaa-b0f7-e939fdae1d40" containerName="mariadb-account-create" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.029373 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.038194 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.038608 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qjf46" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.038915 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.052123 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qxjbg"] Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.170757 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-config\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.170845 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-combined-ca-bundle\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.170881 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtzgg\" (UniqueName: \"kubernetes.io/projected/c0b62851-be03-45b3-8433-3d78718bc4c7-kube-api-access-gtzgg\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.272718 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-config\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.272813 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-combined-ca-bundle\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.272868 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtzgg\" (UniqueName: \"kubernetes.io/projected/c0b62851-be03-45b3-8433-3d78718bc4c7-kube-api-access-gtzgg\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.282374 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-config\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.286629 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-combined-ca-bundle\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.291046 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtzgg\" (UniqueName: \"kubernetes.io/projected/c0b62851-be03-45b3-8433-3d78718bc4c7-kube-api-access-gtzgg\") pod \"neutron-db-sync-qxjbg\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.363889 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:24 crc kubenswrapper[4856]: I1122 08:37:24.639127 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-qxjbg"] Nov 22 08:37:25 crc kubenswrapper[4856]: I1122 08:37:25.450486 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qxjbg" event={"ID":"c0b62851-be03-45b3-8433-3d78718bc4c7","Type":"ContainerStarted","Data":"fccad72b5ea8ef54c9f9294cd09d881b479aa1a8606754fc56187acb4b451818"} Nov 22 08:37:25 crc kubenswrapper[4856]: I1122 08:37:25.450600 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qxjbg" event={"ID":"c0b62851-be03-45b3-8433-3d78718bc4c7","Type":"ContainerStarted","Data":"3da33b45c3f63fb0e30cb1f637f07de1a8806ac5e37ffa86234b8365e2357c90"} Nov 22 08:37:40 crc kubenswrapper[4856]: I1122 08:37:40.603528 4856 generic.go:334] "Generic (PLEG): container finished" podID="c0b62851-be03-45b3-8433-3d78718bc4c7" containerID="fccad72b5ea8ef54c9f9294cd09d881b479aa1a8606754fc56187acb4b451818" exitCode=0 Nov 22 08:37:40 crc kubenswrapper[4856]: I1122 08:37:40.603666 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qxjbg" event={"ID":"c0b62851-be03-45b3-8433-3d78718bc4c7","Type":"ContainerDied","Data":"fccad72b5ea8ef54c9f9294cd09d881b479aa1a8606754fc56187acb4b451818"} Nov 22 08:37:41 crc kubenswrapper[4856]: I1122 08:37:41.993294 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.149785 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-combined-ca-bundle\") pod \"c0b62851-be03-45b3-8433-3d78718bc4c7\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.149937 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-config\") pod \"c0b62851-be03-45b3-8433-3d78718bc4c7\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.149969 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtzgg\" (UniqueName: \"kubernetes.io/projected/c0b62851-be03-45b3-8433-3d78718bc4c7-kube-api-access-gtzgg\") pod \"c0b62851-be03-45b3-8433-3d78718bc4c7\" (UID: \"c0b62851-be03-45b3-8433-3d78718bc4c7\") " Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.155336 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0b62851-be03-45b3-8433-3d78718bc4c7-kube-api-access-gtzgg" (OuterVolumeSpecName: "kube-api-access-gtzgg") pod "c0b62851-be03-45b3-8433-3d78718bc4c7" (UID: "c0b62851-be03-45b3-8433-3d78718bc4c7"). InnerVolumeSpecName "kube-api-access-gtzgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.184231 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-config" (OuterVolumeSpecName: "config") pod "c0b62851-be03-45b3-8433-3d78718bc4c7" (UID: "c0b62851-be03-45b3-8433-3d78718bc4c7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.195569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0b62851-be03-45b3-8433-3d78718bc4c7" (UID: "c0b62851-be03-45b3-8433-3d78718bc4c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.252303 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.252346 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0b62851-be03-45b3-8433-3d78718bc4c7-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.252361 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtzgg\" (UniqueName: \"kubernetes.io/projected/c0b62851-be03-45b3-8433-3d78718bc4c7-kube-api-access-gtzgg\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.630831 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-qxjbg" event={"ID":"c0b62851-be03-45b3-8433-3d78718bc4c7","Type":"ContainerDied","Data":"3da33b45c3f63fb0e30cb1f637f07de1a8806ac5e37ffa86234b8365e2357c90"} Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.630872 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3da33b45c3f63fb0e30cb1f637f07de1a8806ac5e37ffa86234b8365e2357c90" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.630901 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-qxjbg" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.789884 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-596946c445-xwz4m"] Nov 22 08:37:42 crc kubenswrapper[4856]: E1122 08:37:42.790361 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0b62851-be03-45b3-8433-3d78718bc4c7" containerName="neutron-db-sync" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.790383 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0b62851-be03-45b3-8433-3d78718bc4c7" containerName="neutron-db-sync" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.790639 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0b62851-be03-45b3-8433-3d78718bc4c7" containerName="neutron-db-sync" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.792087 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.802855 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-596946c445-xwz4m"] Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.860126 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-597f54fbb8-hrfv7"] Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.861682 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.865124 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-config\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.865201 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-sb\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.865250 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cggwm\" (UniqueName: \"kubernetes.io/projected/ec0b1d15-fa52-4230-9a43-acea2137779e-kube-api-access-cggwm\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.865289 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-nb\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.865367 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-dns-svc\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.870323 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.871190 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qjf46" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.871880 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.877602 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.897095 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-597f54fbb8-hrfv7"] Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.970906 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-config\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.970967 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-sb\") pod 
\"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971015 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cggwm\" (UniqueName: \"kubernetes.io/projected/ec0b1d15-fa52-4230-9a43-acea2137779e-kube-api-access-cggwm\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971042 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-ovndb-tls-certs\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971060 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-httpd-config\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971077 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-combined-ca-bundle\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971106 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-nb\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971129 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6492\" (UniqueName: \"kubernetes.io/projected/abb31d3b-785f-466f-96fb-9b9c58385e69-kube-api-access-b6492\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-dns-svc\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.971189 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-config\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.972122 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-config\") pod 
\"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.972956 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-sb\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.975041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-nb\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:42 crc kubenswrapper[4856]: I1122 08:37:42.975571 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-dns-svc\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:42.993309 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cggwm\" (UniqueName: \"kubernetes.io/projected/ec0b1d15-fa52-4230-9a43-acea2137779e-kube-api-access-cggwm\") pod \"dnsmasq-dns-596946c445-xwz4m\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.072977 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-ovndb-tls-certs\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.073038 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-httpd-config\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.073059 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-combined-ca-bundle\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.073102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6492\" (UniqueName: \"kubernetes.io/projected/abb31d3b-785f-466f-96fb-9b9c58385e69-kube-api-access-b6492\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.073159 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-config\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 
22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.077460 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-ovndb-tls-certs\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.077882 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-combined-ca-bundle\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.078417 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-config\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.078805 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-httpd-config\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.097464 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6492\" (UniqueName: \"kubernetes.io/projected/abb31d3b-785f-466f-96fb-9b9c58385e69-kube-api-access-b6492\") pod \"neutron-597f54fbb8-hrfv7\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.116466 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.200921 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.614054 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-596946c445-xwz4m"] Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.639929 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596946c445-xwz4m" event={"ID":"ec0b1d15-fa52-4230-9a43-acea2137779e","Type":"ContainerStarted","Data":"59bcca73e22d2cb36e7b98511753c47b9b0d4655f11310f8255ebb6bf7601a56"} Nov 22 08:37:43 crc kubenswrapper[4856]: I1122 08:37:43.759988 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-597f54fbb8-hrfv7"] Nov 22 08:37:43 crc kubenswrapper[4856]: W1122 08:37:43.765120 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabb31d3b_785f_466f_96fb_9b9c58385e69.slice/crio-b84e47086594a3747281c6d4b8c91cd3013179796d6e8c1b27a1af5be14e16cf WatchSource:0}: Error finding container b84e47086594a3747281c6d4b8c91cd3013179796d6e8c1b27a1af5be14e16cf: Status 404 returned error can't find the container with id b84e47086594a3747281c6d4b8c91cd3013179796d6e8c1b27a1af5be14e16cf Nov 22 08:37:44 crc kubenswrapper[4856]: I1122 08:37:44.654102 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597f54fbb8-hrfv7" event={"ID":"abb31d3b-785f-466f-96fb-9b9c58385e69","Type":"ContainerStarted","Data":"95631fe6cfa1e54dbdecb8bf34c4a15509321ade08e76e396aa4da74b3777b34"} Nov 22 08:37:44 crc kubenswrapper[4856]: I1122 08:37:44.654422 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597f54fbb8-hrfv7" event={"ID":"abb31d3b-785f-466f-96fb-9b9c58385e69","Type":"ContainerStarted","Data":"f26db040959fedf739cfb67a9faf44bde5386222f9784d71f836825130aa1fd5"} Nov 22 08:37:44 crc kubenswrapper[4856]: I1122 08:37:44.654434 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597f54fbb8-hrfv7" event={"ID":"abb31d3b-785f-466f-96fb-9b9c58385e69","Type":"ContainerStarted","Data":"b84e47086594a3747281c6d4b8c91cd3013179796d6e8c1b27a1af5be14e16cf"} Nov 22 08:37:44 crc kubenswrapper[4856]: I1122 08:37:44.656894 4856 generic.go:334] "Generic (PLEG): container finished" podID="ec0b1d15-fa52-4230-9a43-acea2137779e" containerID="64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114" exitCode=0 Nov 22 08:37:44 crc kubenswrapper[4856]: I1122 08:37:44.656995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596946c445-xwz4m" event={"ID":"ec0b1d15-fa52-4230-9a43-acea2137779e","Type":"ContainerDied","Data":"64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114"} Nov 22 08:37:44 crc kubenswrapper[4856]: I1122 08:37:44.696373 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-597f54fbb8-hrfv7" podStartSLOduration=2.696347868 podStartE2EDuration="2.696347868s" podCreationTimestamp="2025-11-22 08:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:37:44.671563351 +0000 UTC m=+5707.084956619" watchObservedRunningTime="2025-11-22 08:37:44.696347868 +0000 UTC m=+5707.109741126" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.241186 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-8445b95697-hfkrr"] Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.243577 4856 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.246473 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.247016 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.257934 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8445b95697-hfkrr"] Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.412455 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-config\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.412533 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjms\" (UniqueName: \"kubernetes.io/projected/4b1f18e1-3e4e-4337-b2e2-e4363d635895-kube-api-access-sjjms\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.412605 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-internal-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.412670 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-ovndb-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.412714 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-public-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.412847 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-combined-ca-bundle\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.412888 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-httpd-config\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.515041 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-combined-ca-bundle\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.515088 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-httpd-config\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.515128 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-config\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.515156 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjms\" (UniqueName: \"kubernetes.io/projected/4b1f18e1-3e4e-4337-b2e2-e4363d635895-kube-api-access-sjjms\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.515206 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-internal-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.515232 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-ovndb-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.515256 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-public-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.521604 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-internal-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.521780 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-config\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.522092 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-public-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: 
\"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.522111 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-httpd-config\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.524366 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-ovndb-tls-certs\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.525197 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1f18e1-3e4e-4337-b2e2-e4363d635895-combined-ca-bundle\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.537216 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjms\" (UniqueName: \"kubernetes.io/projected/4b1f18e1-3e4e-4337-b2e2-e4363d635895-kube-api-access-sjjms\") pod \"neutron-8445b95697-hfkrr\" (UID: \"4b1f18e1-3e4e-4337-b2e2-e4363d635895\") " pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.560843 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.670624 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596946c445-xwz4m" event={"ID":"ec0b1d15-fa52-4230-9a43-acea2137779e","Type":"ContainerStarted","Data":"adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7"} Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.670931 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.670945 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:45 crc kubenswrapper[4856]: I1122 08:37:45.704403 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-596946c445-xwz4m" podStartSLOduration=3.704380017 podStartE2EDuration="3.704380017s" podCreationTimestamp="2025-11-22 08:37:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:37:45.692377004 +0000 UTC m=+5708.105770252" watchObservedRunningTime="2025-11-22 08:37:45.704380017 +0000 UTC m=+5708.117773275" Nov 22 08:37:46 crc kubenswrapper[4856]: I1122 08:37:46.133983 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-8445b95697-hfkrr"] Nov 22 08:37:46 crc kubenswrapper[4856]: W1122 08:37:46.134197 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b1f18e1_3e4e_4337_b2e2_e4363d635895.slice/crio-3f057b8d26aa68aa586b46c62f71b3d2c5661f71b9e7e48d97767427bf280cba WatchSource:0}: Error finding container 
3f057b8d26aa68aa586b46c62f71b3d2c5661f71b9e7e48d97767427bf280cba: Status 404 returned error can't find the container with id 3f057b8d26aa68aa586b46c62f71b3d2c5661f71b9e7e48d97767427bf280cba Nov 22 08:37:46 crc kubenswrapper[4856]: I1122 08:37:46.696648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445b95697-hfkrr" event={"ID":"4b1f18e1-3e4e-4337-b2e2-e4363d635895","Type":"ContainerStarted","Data":"180dce1dbbc9a43abff04e70fe2128a49e8cd6a75cf2679ba18f9a5bcc37026a"} Nov 22 08:37:46 crc kubenswrapper[4856]: I1122 08:37:46.696934 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445b95697-hfkrr" event={"ID":"4b1f18e1-3e4e-4337-b2e2-e4363d635895","Type":"ContainerStarted","Data":"d80a3ab940bdab732e0b4351ae0b7d62b735f526b0148b94da6d592104341635"} Nov 22 08:37:46 crc kubenswrapper[4856]: I1122 08:37:46.696944 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-8445b95697-hfkrr" event={"ID":"4b1f18e1-3e4e-4337-b2e2-e4363d635895","Type":"ContainerStarted","Data":"3f057b8d26aa68aa586b46c62f71b3d2c5661f71b9e7e48d97767427bf280cba"} Nov 22 08:37:46 crc kubenswrapper[4856]: I1122 08:37:46.696958 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:37:46 crc kubenswrapper[4856]: I1122 08:37:46.726903 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-8445b95697-hfkrr" podStartSLOduration=1.726879746 podStartE2EDuration="1.726879746s" podCreationTimestamp="2025-11-22 08:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:37:46.716848966 +0000 UTC m=+5709.130242234" watchObservedRunningTime="2025-11-22 08:37:46.726879746 +0000 UTC m=+5709.140273004" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.118796 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.172061 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d9c44b575-dqqvn"] Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.172308 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" podUID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerName="dnsmasq-dns" containerID="cri-o://92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088" gracePeriod=10 Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.647739 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.757429 4856 generic.go:334] "Generic (PLEG): container finished" podID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerID="92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088" exitCode=0 Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.757473 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" event={"ID":"95999a93-29e1-455b-840f-06b9e8e5cacc","Type":"ContainerDied","Data":"92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088"} Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.757499 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" event={"ID":"95999a93-29e1-455b-840f-06b9e8e5cacc","Type":"ContainerDied","Data":"8f7f955bdd6d28dfd51d002811316070b5b5a2292255e7b13a89461452c5969e"} Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.757529 4856 scope.go:117] "RemoveContainer" containerID="92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.757612 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d9c44b575-dqqvn" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.772350 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbb6t\" (UniqueName: \"kubernetes.io/projected/95999a93-29e1-455b-840f-06b9e8e5cacc-kube-api-access-pbb6t\") pod \"95999a93-29e1-455b-840f-06b9e8e5cacc\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.772615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-sb\") pod \"95999a93-29e1-455b-840f-06b9e8e5cacc\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.772665 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-dns-svc\") pod \"95999a93-29e1-455b-840f-06b9e8e5cacc\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.772694 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-config\") pod \"95999a93-29e1-455b-840f-06b9e8e5cacc\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.772797 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-nb\") pod \"95999a93-29e1-455b-840f-06b9e8e5cacc\" (UID: \"95999a93-29e1-455b-840f-06b9e8e5cacc\") " Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.780967 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95999a93-29e1-455b-840f-06b9e8e5cacc-kube-api-access-pbb6t" (OuterVolumeSpecName: "kube-api-access-pbb6t") pod "95999a93-29e1-455b-840f-06b9e8e5cacc" (UID: "95999a93-29e1-455b-840f-06b9e8e5cacc"). InnerVolumeSpecName "kube-api-access-pbb6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.787490 4856 scope.go:117] "RemoveContainer" containerID="041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.824148 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95999a93-29e1-455b-840f-06b9e8e5cacc" (UID: "95999a93-29e1-455b-840f-06b9e8e5cacc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.824823 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-config" (OuterVolumeSpecName: "config") pod "95999a93-29e1-455b-840f-06b9e8e5cacc" (UID: "95999a93-29e1-455b-840f-06b9e8e5cacc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.826468 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95999a93-29e1-455b-840f-06b9e8e5cacc" (UID: "95999a93-29e1-455b-840f-06b9e8e5cacc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.835317 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95999a93-29e1-455b-840f-06b9e8e5cacc" (UID: "95999a93-29e1-455b-840f-06b9e8e5cacc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.875892 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbb6t\" (UniqueName: \"kubernetes.io/projected/95999a93-29e1-455b-840f-06b9e8e5cacc-kube-api-access-pbb6t\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.875943 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.875955 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.876272 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.876284 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95999a93-29e1-455b-840f-06b9e8e5cacc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.884018 4856 scope.go:117] "RemoveContainer" containerID="92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088" Nov 22 08:37:53 crc kubenswrapper[4856]: E1122 08:37:53.884609 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088\": container with ID starting with 92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088 not found: ID does not exist" containerID="92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.884652 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088"} err="failed to get container status \"92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088\": rpc error: code = NotFound desc = could not find container \"92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088\": container with ID starting with 92a966f24693c02e9b3e5cf8ff5f38b2525e22fa61e79d0d854ffd9855a04088 not found: ID does not exist" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.884677 4856 scope.go:117] "RemoveContainer" containerID="041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3" Nov 22 08:37:53 crc kubenswrapper[4856]: E1122 08:37:53.885010 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3\": container with ID starting with 041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3 not found: ID does not exist" containerID="041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3" Nov 22 08:37:53 crc kubenswrapper[4856]: I1122 08:37:53.885044 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3"} err="failed to get container status 
\"041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3\": rpc error: code = NotFound desc = could not find container \"041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3\": container with ID starting with 041415267adf801158898c9db7b875100fc30771bd088f012b474c05f8e1c8c3 not found: ID does not exist" Nov 22 08:37:54 crc kubenswrapper[4856]: I1122 08:37:54.098597 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d9c44b575-dqqvn"] Nov 22 08:37:54 crc kubenswrapper[4856]: I1122 08:37:54.104576 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d9c44b575-dqqvn"] Nov 22 08:37:54 crc kubenswrapper[4856]: I1122 08:37:54.723395 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95999a93-29e1-455b-840f-06b9e8e5cacc" path="/var/lib/kubelet/pods/95999a93-29e1-455b-840f-06b9e8e5cacc/volumes" Nov 22 08:37:59 crc kubenswrapper[4856]: I1122 08:37:59.754367 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:37:59 crc kubenswrapper[4856]: I1122 08:37:59.754754 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:38:13 crc kubenswrapper[4856]: I1122 08:38:13.210022 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:38:15 crc kubenswrapper[4856]: I1122 08:38:15.577019 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-8445b95697-hfkrr" Nov 22 08:38:15 crc kubenswrapper[4856]: I1122 08:38:15.677834 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-597f54fbb8-hrfv7"] Nov 22 08:38:15 crc kubenswrapper[4856]: I1122 08:38:15.678528 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-597f54fbb8-hrfv7" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-api" containerID="cri-o://f26db040959fedf739cfb67a9faf44bde5386222f9784d71f836825130aa1fd5" gracePeriod=30 Nov 22 08:38:15 crc kubenswrapper[4856]: I1122 08:38:15.678796 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-597f54fbb8-hrfv7" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-httpd" containerID="cri-o://95631fe6cfa1e54dbdecb8bf34c4a15509321ade08e76e396aa4da74b3777b34" gracePeriod=30 Nov 22 08:38:16 crc kubenswrapper[4856]: I1122 08:38:16.034921 4856 generic.go:334] "Generic (PLEG): container finished" podID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerID="95631fe6cfa1e54dbdecb8bf34c4a15509321ade08e76e396aa4da74b3777b34" exitCode=0 Nov 22 08:38:16 crc kubenswrapper[4856]: I1122 08:38:16.034968 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597f54fbb8-hrfv7" event={"ID":"abb31d3b-785f-466f-96fb-9b9c58385e69","Type":"ContainerDied","Data":"95631fe6cfa1e54dbdecb8bf34c4a15509321ade08e76e396aa4da74b3777b34"} Nov 22 08:38:25 crc kubenswrapper[4856]: I1122 08:38:25.121298 4856 generic.go:334] "Generic (PLEG): container 
finished" podID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerID="f26db040959fedf739cfb67a9faf44bde5386222f9784d71f836825130aa1fd5" exitCode=0 Nov 22 08:38:25 crc kubenswrapper[4856]: I1122 08:38:25.121453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597f54fbb8-hrfv7" event={"ID":"abb31d3b-785f-466f-96fb-9b9c58385e69","Type":"ContainerDied","Data":"f26db040959fedf739cfb67a9faf44bde5386222f9784d71f836825130aa1fd5"} Nov 22 08:38:25 crc kubenswrapper[4856]: I1122 08:38:25.348374 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:38:25 crc kubenswrapper[4856]: I1122 08:38:25.492235 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-combined-ca-bundle\") pod \"abb31d3b-785f-466f-96fb-9b9c58385e69\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " Nov 22 08:38:25 crc kubenswrapper[4856]: I1122 08:38:25.492607 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6492\" (UniqueName: \"kubernetes.io/projected/abb31d3b-785f-466f-96fb-9b9c58385e69-kube-api-access-b6492\") pod \"abb31d3b-785f-466f-96fb-9b9c58385e69\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.492667 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-ovndb-tls-certs\") pod \"abb31d3b-785f-466f-96fb-9b9c58385e69\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.492789 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-config\") pod \"abb31d3b-785f-466f-96fb-9b9c58385e69\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.492888 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-httpd-config\") pod \"abb31d3b-785f-466f-96fb-9b9c58385e69\" (UID: \"abb31d3b-785f-466f-96fb-9b9c58385e69\") " Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.507874 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abb31d3b-785f-466f-96fb-9b9c58385e69-kube-api-access-b6492" (OuterVolumeSpecName: "kube-api-access-b6492") pod "abb31d3b-785f-466f-96fb-9b9c58385e69" (UID: "abb31d3b-785f-466f-96fb-9b9c58385e69"). InnerVolumeSpecName "kube-api-access-b6492". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.530738 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "abb31d3b-785f-466f-96fb-9b9c58385e69" (UID: "abb31d3b-785f-466f-96fb-9b9c58385e69"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.594642 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.594667 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6492\" (UniqueName: \"kubernetes.io/projected/abb31d3b-785f-466f-96fb-9b9c58385e69-kube-api-access-b6492\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.622795 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-config" (OuterVolumeSpecName: "config") pod "abb31d3b-785f-466f-96fb-9b9c58385e69" (UID: "abb31d3b-785f-466f-96fb-9b9c58385e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.628363 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2sgjz"] Nov 22 08:38:26 crc kubenswrapper[4856]: E1122 08:38:25.636856 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-httpd" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.636889 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-httpd" Nov 22 08:38:26 crc kubenswrapper[4856]: E1122 08:38:25.636905 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerName="init" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.636911 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerName="init" Nov 22 08:38:26 crc kubenswrapper[4856]: E1122 08:38:25.636925 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerName="dnsmasq-dns" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.636931 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerName="dnsmasq-dns" Nov 22 08:38:26 crc kubenswrapper[4856]: E1122 08:38:25.636951 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-api" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.636957 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-api" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.637127 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="95999a93-29e1-455b-840f-06b9e8e5cacc" containerName="dnsmasq-dns" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.637142 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-api" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.637156 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" containerName="neutron-httpd" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.637848 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.651257 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-55w9q" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.652298 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.652523 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.652652 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.660641 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.661173 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2sgjz"] Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.661445 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "abb31d3b-785f-466f-96fb-9b9c58385e69" (UID: "abb31d3b-785f-466f-96fb-9b9c58385e69"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.680740 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abb31d3b-785f-466f-96fb-9b9c58385e69" (UID: "abb31d3b-785f-466f-96fb-9b9c58385e69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.700358 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.700389 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.700401 4856 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/abb31d3b-785f-466f-96fb-9b9c58385e69-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.725036 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-557f8c765f-svdht"] Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.731767 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.755605 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557f8c765f-svdht"] Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802573 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-combined-ca-bundle\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802617 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-sb\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z299k\" (UniqueName: \"kubernetes.io/projected/42d62543-107b-4d42-a45b-aa1f49b3323c-kube-api-access-z299k\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802752 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ksb\" (UniqueName: \"kubernetes.io/projected/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-kube-api-access-s2ksb\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802787 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-scripts\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802869 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-dispersionconf\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802897 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-ring-data-devices\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.802963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-nb\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 
08:38:25.803066 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-etc-swift\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.803254 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-dns-svc\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.803476 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-swiftconf\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.805818 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-config\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.907936 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-swiftconf\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.908011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-config\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.908267 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-combined-ca-bundle\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.908298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-sb\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.908819 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-config\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.909003 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-sb\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.909036 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z299k\" (UniqueName: \"kubernetes.io/projected/42d62543-107b-4d42-a45b-aa1f49b3323c-kube-api-access-z299k\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.909082 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2ksb\" (UniqueName: \"kubernetes.io/projected/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-kube-api-access-s2ksb\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.909106 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-scripts\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.909766 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-scripts\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.910081 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-dispersionconf\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.910126 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-ring-data-devices\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.910250 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-nb\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.910275 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-etc-swift\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.910329 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-dns-svc\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.910888 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-etc-swift\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.910990 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-ring-data-devices\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.911858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-nb\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.911972 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-dns-svc\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.914461 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-dispersionconf\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.918021 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-swiftconf\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.918167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-combined-ca-bundle\") pod \"swift-ring-rebalance-2sgjz\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.933270 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z299k\" (UniqueName: \"kubernetes.io/projected/42d62543-107b-4d42-a45b-aa1f49b3323c-kube-api-access-z299k\") pod \"dnsmasq-dns-557f8c765f-svdht\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:25.942251 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2ksb\" (UniqueName: \"kubernetes.io/projected/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-kube-api-access-s2ksb\") pod \"swift-ring-rebalance-2sgjz\" (UID: 
\"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.008581 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.092795 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.140388 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-597f54fbb8-hrfv7" event={"ID":"abb31d3b-785f-466f-96fb-9b9c58385e69","Type":"ContainerDied","Data":"b84e47086594a3747281c6d4b8c91cd3013179796d6e8c1b27a1af5be14e16cf"} Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.140433 4856 scope.go:117] "RemoveContainer" containerID="95631fe6cfa1e54dbdecb8bf34c4a15509321ade08e76e396aa4da74b3777b34" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.140636 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-597f54fbb8-hrfv7" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.192254 4856 scope.go:117] "RemoveContainer" containerID="f26db040959fedf739cfb67a9faf44bde5386222f9784d71f836825130aa1fd5" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.209057 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-597f54fbb8-hrfv7"] Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.218793 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-597f54fbb8-hrfv7"] Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.720096 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abb31d3b-785f-466f-96fb-9b9c58385e69" path="/var/lib/kubelet/pods/abb31d3b-785f-466f-96fb-9b9c58385e69/volumes" Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.776326 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.776617 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2sgjz"] Nov 22 08:38:26 crc kubenswrapper[4856]: I1122 08:38:26.847398 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-557f8c765f-svdht"] Nov 22 08:38:26 crc kubenswrapper[4856]: W1122 08:38:26.850717 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d62543_107b_4d42_a45b_aa1f49b3323c.slice/crio-9083cb76f203c50a10a17b85ab1e6f6a9f87eafb1b2747001cdb5fb690b80528 WatchSource:0}: Error finding container 9083cb76f203c50a10a17b85ab1e6f6a9f87eafb1b2747001cdb5fb690b80528: Status 404 returned error can't find the container with id 9083cb76f203c50a10a17b85ab1e6f6a9f87eafb1b2747001cdb5fb690b80528 Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.154464 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sgjz" event={"ID":"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a","Type":"ContainerStarted","Data":"313820dc72126c11a610352d1fcb96972b9de8563d70c41e19669ba81820295a"} Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.157732 4856 generic.go:334] "Generic (PLEG): container finished" podID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerID="68cccd80e4a60ecdccd11dfcdaedf805c3ddc70486ad2da449bff5e73eb86c76" exitCode=0 Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.157825 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f8c765f-svdht" event={"ID":"42d62543-107b-4d42-a45b-aa1f49b3323c","Type":"ContainerDied","Data":"68cccd80e4a60ecdccd11dfcdaedf805c3ddc70486ad2da449bff5e73eb86c76"} Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.157853 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f8c765f-svdht" event={"ID":"42d62543-107b-4d42-a45b-aa1f49b3323c","Type":"ContainerStarted","Data":"9083cb76f203c50a10a17b85ab1e6f6a9f87eafb1b2747001cdb5fb690b80528"} Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.582568 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5cff4f4b96-k7sg7"] Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.601491 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.603849 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cff4f4b96-k7sg7"] Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.605031 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.759428 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-etc-swift\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.759503 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-config-data\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.759541 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfc6z\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-kube-api-access-bfc6z\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.759785 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-combined-ca-bundle\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.759963 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-log-httpd\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.760015 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-run-httpd\") pod 
\"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.862480 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-combined-ca-bundle\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.862577 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-log-httpd\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.862607 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-run-httpd\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.863186 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-log-httpd\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.863273 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-etc-swift\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.863663 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-run-httpd\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.864167 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-config-data\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.864212 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfc6z\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-kube-api-access-bfc6z\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.867828 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-config-data\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " 
pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.868305 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-combined-ca-bundle\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.871667 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-etc-swift\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.885949 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfc6z\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-kube-api-access-bfc6z\") pod \"swift-proxy-5cff4f4b96-k7sg7\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:27 crc kubenswrapper[4856]: I1122 08:38:27.947706 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:28 crc kubenswrapper[4856]: I1122 08:38:28.172260 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f8c765f-svdht" event={"ID":"42d62543-107b-4d42-a45b-aa1f49b3323c","Type":"ContainerStarted","Data":"1c2c44d843e73c4b6787792bfe4b60e295fc4ab12bc5ba33086158551c0869fb"} Nov 22 08:38:28 crc kubenswrapper[4856]: I1122 08:38:28.173367 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:28 crc kubenswrapper[4856]: I1122 08:38:28.189431 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-557f8c765f-svdht" podStartSLOduration=3.189413054 podStartE2EDuration="3.189413054s" podCreationTimestamp="2025-11-22 08:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:38:28.188941842 +0000 UTC m=+5750.602335100" watchObservedRunningTime="2025-11-22 08:38:28.189413054 +0000 UTC m=+5750.602806312" Nov 22 08:38:28 crc kubenswrapper[4856]: I1122 08:38:28.793880 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5cff4f4b96-k7sg7"] Nov 22 08:38:29 crc kubenswrapper[4856]: I1122 08:38:29.754924 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:38:29 crc kubenswrapper[4856]: I1122 08:38:29.755313 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.200424 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" 
event={"ID":"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf","Type":"ContainerStarted","Data":"90591addfaeae8723e80c7a411e45ad849799dad5457dec2ac6f3103c0ec861d"} Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.646867 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-68fcd9d79d-pb2lw"] Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.649954 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.653170 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.654060 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.663288 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68fcd9d79d-pb2lw"] Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731232 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-combined-ca-bundle\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731414 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24afd937-020f-43ff-beec-3bccac3dffec-log-httpd\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731439 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-config-data\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731502 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-public-tls-certs\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731675 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-internal-tls-certs\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731732 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq8zj\" (UniqueName: \"kubernetes.io/projected/24afd937-020f-43ff-beec-3bccac3dffec-kube-api-access-lq8zj\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731922 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/24afd937-020f-43ff-beec-3bccac3dffec-etc-swift\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.731973 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24afd937-020f-43ff-beec-3bccac3dffec-run-httpd\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.833454 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24afd937-020f-43ff-beec-3bccac3dffec-log-httpd\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.833495 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-config-data\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.833899 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-public-tls-certs\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.833933 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-internal-tls-certs\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.834073 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24afd937-020f-43ff-beec-3bccac3dffec-log-httpd\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.833958 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq8zj\" (UniqueName: \"kubernetes.io/projected/24afd937-020f-43ff-beec-3bccac3dffec-kube-api-access-lq8zj\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.835363 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/24afd937-020f-43ff-beec-3bccac3dffec-etc-swift\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.835408 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24afd937-020f-43ff-beec-3bccac3dffec-run-httpd\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.835499 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-combined-ca-bundle\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.836500 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/24afd937-020f-43ff-beec-3bccac3dffec-run-httpd\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.840701 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-public-tls-certs\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.840810 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-combined-ca-bundle\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.841774 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-config-data\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.847185 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/24afd937-020f-43ff-beec-3bccac3dffec-internal-tls-certs\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.847668 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/24afd937-020f-43ff-beec-3bccac3dffec-etc-swift\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.855953 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq8zj\" (UniqueName: \"kubernetes.io/projected/24afd937-020f-43ff-beec-3bccac3dffec-kube-api-access-lq8zj\") pod \"swift-proxy-68fcd9d79d-pb2lw\" (UID: \"24afd937-020f-43ff-beec-3bccac3dffec\") " pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:30 crc kubenswrapper[4856]: I1122 08:38:30.970172 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:31 crc kubenswrapper[4856]: I1122 08:38:31.808127 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-68fcd9d79d-pb2lw"] Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.220205 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sgjz" event={"ID":"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a","Type":"ContainerStarted","Data":"243e92d5f1a9cd2e2a2f4d8c348e3ff68df5030947833c6b345fb29f9d91d0cc"} Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.222379 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" event={"ID":"24afd937-020f-43ff-beec-3bccac3dffec","Type":"ContainerStarted","Data":"ab4fde50398e4f597323af2052ea2aa7c965f20bf267eb13f7dee91bfafffba0"} Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.222466 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" event={"ID":"24afd937-020f-43ff-beec-3bccac3dffec","Type":"ContainerStarted","Data":"6f4c44523008c92488df4acc29b81cf009f1daee753c134e0dfcbdb06b6455ca"} Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.224454 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" event={"ID":"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf","Type":"ContainerStarted","Data":"44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9"} Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.224499 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" event={"ID":"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf","Type":"ContainerStarted","Data":"0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c"} Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.225211 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.246238 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2sgjz" podStartSLOduration=2.615745776 podStartE2EDuration="7.246218997s" podCreationTimestamp="2025-11-22 08:38:25 +0000 UTC" firstStartedPulling="2025-11-22 08:38:26.776089835 +0000 UTC m=+5749.189483093" lastFinishedPulling="2025-11-22 08:38:31.406563056 +0000 UTC m=+5753.819956314" observedRunningTime="2025-11-22 08:38:32.240226937 +0000 UTC m=+5754.653620215" watchObservedRunningTime="2025-11-22 08:38:32.246218997 +0000 UTC m=+5754.659612255" Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.274171 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" podStartSLOduration=3.700383224 podStartE2EDuration="5.274142998s" podCreationTimestamp="2025-11-22 08:38:27 +0000 UTC" firstStartedPulling="2025-11-22 08:38:29.817204932 +0000 UTC m=+5752.230598200" lastFinishedPulling="2025-11-22 08:38:31.390964716 +0000 UTC m=+5753.804357974" observedRunningTime="2025-11-22 08:38:32.260033229 +0000 UTC m=+5754.673426487" watchObservedRunningTime="2025-11-22 08:38:32.274142998 +0000 UTC m=+5754.687536256" Nov 22 08:38:32 crc kubenswrapper[4856]: I1122 08:38:32.948876 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:33 crc kubenswrapper[4856]: I1122 08:38:33.265939 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-proxy-68fcd9d79d-pb2lw" event={"ID":"24afd937-020f-43ff-beec-3bccac3dffec","Type":"ContainerStarted","Data":"29c8ce7c0125b86ea33f30ac154be813f6926f2058abc0a590c7d1215456fcee"} Nov 22 08:38:33 crc kubenswrapper[4856]: I1122 08:38:33.266913 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:33 crc kubenswrapper[4856]: I1122 08:38:33.266960 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:33 crc kubenswrapper[4856]: I1122 08:38:33.293592 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" podStartSLOduration=3.293575855 podStartE2EDuration="3.293575855s" podCreationTimestamp="2025-11-22 08:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:38:33.289964727 +0000 UTC m=+5755.703357985" watchObservedRunningTime="2025-11-22 08:38:33.293575855 +0000 UTC m=+5755.706969113" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.094690 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.170051 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-596946c445-xwz4m"] Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.170442 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-596946c445-xwz4m" podUID="ec0b1d15-fa52-4230-9a43-acea2137779e" containerName="dnsmasq-dns" containerID="cri-o://adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7" gracePeriod=10 Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.328968 4856 generic.go:334] "Generic (PLEG): container finished" podID="58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" containerID="243e92d5f1a9cd2e2a2f4d8c348e3ff68df5030947833c6b345fb29f9d91d0cc" exitCode=0 Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.329063 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sgjz" event={"ID":"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a","Type":"ContainerDied","Data":"243e92d5f1a9cd2e2a2f4d8c348e3ff68df5030947833c6b345fb29f9d91d0cc"} Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.652009 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.761824 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cggwm\" (UniqueName: \"kubernetes.io/projected/ec0b1d15-fa52-4230-9a43-acea2137779e-kube-api-access-cggwm\") pod \"ec0b1d15-fa52-4230-9a43-acea2137779e\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.761963 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-dns-svc\") pod \"ec0b1d15-fa52-4230-9a43-acea2137779e\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.762063 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-nb\") pod \"ec0b1d15-fa52-4230-9a43-acea2137779e\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.762194 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-sb\") pod \"ec0b1d15-fa52-4230-9a43-acea2137779e\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.762249 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-config\") pod \"ec0b1d15-fa52-4230-9a43-acea2137779e\" (UID: \"ec0b1d15-fa52-4230-9a43-acea2137779e\") " Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.771368 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec0b1d15-fa52-4230-9a43-acea2137779e-kube-api-access-cggwm" (OuterVolumeSpecName: "kube-api-access-cggwm") pod "ec0b1d15-fa52-4230-9a43-acea2137779e" (UID: "ec0b1d15-fa52-4230-9a43-acea2137779e"). InnerVolumeSpecName "kube-api-access-cggwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.809757 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec0b1d15-fa52-4230-9a43-acea2137779e" (UID: "ec0b1d15-fa52-4230-9a43-acea2137779e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.815706 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-config" (OuterVolumeSpecName: "config") pod "ec0b1d15-fa52-4230-9a43-acea2137779e" (UID: "ec0b1d15-fa52-4230-9a43-acea2137779e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.818208 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec0b1d15-fa52-4230-9a43-acea2137779e" (UID: "ec0b1d15-fa52-4230-9a43-acea2137779e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.830617 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec0b1d15-fa52-4230-9a43-acea2137779e" (UID: "ec0b1d15-fa52-4230-9a43-acea2137779e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.865057 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cggwm\" (UniqueName: \"kubernetes.io/projected/ec0b1d15-fa52-4230-9a43-acea2137779e-kube-api-access-cggwm\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.865090 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.865099 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.865108 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:36 crc kubenswrapper[4856]: I1122 08:38:36.865117 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec0b1d15-fa52-4230-9a43-acea2137779e-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.340596 4856 generic.go:334] "Generic (PLEG): container finished" podID="ec0b1d15-fa52-4230-9a43-acea2137779e" containerID="adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7" exitCode=0 Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.340674 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596946c445-xwz4m" event={"ID":"ec0b1d15-fa52-4230-9a43-acea2137779e","Type":"ContainerDied","Data":"adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7"} Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.340727 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-596946c445-xwz4m" event={"ID":"ec0b1d15-fa52-4230-9a43-acea2137779e","Type":"ContainerDied","Data":"59bcca73e22d2cb36e7b98511753c47b9b0d4655f11310f8255ebb6bf7601a56"} Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.340750 4856 scope.go:117] "RemoveContainer" containerID="adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.341202 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-596946c445-xwz4m" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.369214 4856 scope.go:117] "RemoveContainer" containerID="64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.382463 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-596946c445-xwz4m"] Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.403710 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-596946c445-xwz4m"] Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.407774 4856 scope.go:117] "RemoveContainer" containerID="adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7" Nov 22 08:38:37 crc kubenswrapper[4856]: E1122 08:38:37.412036 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7\": container with ID starting with adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7 not found: ID does not exist" containerID="adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.412081 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7"} err="failed to get container status \"adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7\": rpc error: code = NotFound desc = could not find container \"adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7\": container with ID starting with adc2033c7f657c25af2de5e09fdb79beb1db8d3f4fd713bfb840b608cabfbad7 not found: ID does not exist" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.412139 4856 scope.go:117] "RemoveContainer" containerID="64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114" Nov 22 08:38:37 crc kubenswrapper[4856]: E1122 08:38:37.413015 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114\": container with ID starting with 64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114 not found: ID does not exist" containerID="64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.413040 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114"} err="failed to get container status \"64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114\": rpc error: code = NotFound desc = could not find container \"64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114\": container with ID starting with 64507337b77b6a26027fa011d05f8c748a774c61beb089ec433c1e881a9e0114 not found: ID does not exist" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.716969 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.885268 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-combined-ca-bundle\") pod \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.885378 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-swiftconf\") pod \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.885465 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2ksb\" (UniqueName: \"kubernetes.io/projected/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-kube-api-access-s2ksb\") pod \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.885494 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-scripts\") pod \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.885573 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-dispersionconf\") pod \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.885657 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-ring-data-devices\") pod \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.885683 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-etc-swift\") pod \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\" (UID: \"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a\") " Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.886342 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" (UID: "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.886680 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" (UID: "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.890725 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-kube-api-access-s2ksb" (OuterVolumeSpecName: "kube-api-access-s2ksb") pod "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" (UID: "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a"). InnerVolumeSpecName "kube-api-access-s2ksb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.895566 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" (UID: "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.915772 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-scripts" (OuterVolumeSpecName: "scripts") pod "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" (UID: "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.918163 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" (UID: "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.919862 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" (UID: "58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.950244 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.951369 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.987761 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.988960 4856 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.989015 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2ksb\" (UniqueName: \"kubernetes.io/projected/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-kube-api-access-s2ksb\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.989036 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.989110 4856 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.989127 4856 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:37 crc kubenswrapper[4856]: I1122 08:38:37.989141 4856 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:38 crc kubenswrapper[4856]: I1122 08:38:38.353232 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2sgjz" Nov 22 08:38:38 crc kubenswrapper[4856]: I1122 08:38:38.363850 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2sgjz" event={"ID":"58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a","Type":"ContainerDied","Data":"313820dc72126c11a610352d1fcb96972b9de8563d70c41e19669ba81820295a"} Nov 22 08:38:38 crc kubenswrapper[4856]: I1122 08:38:38.363903 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="313820dc72126c11a610352d1fcb96972b9de8563d70c41e19669ba81820295a" Nov 22 08:38:38 crc kubenswrapper[4856]: I1122 08:38:38.724357 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec0b1d15-fa52-4230-9a43-acea2137779e" path="/var/lib/kubelet/pods/ec0b1d15-fa52-4230-9a43-acea2137779e/volumes" Nov 22 08:38:40 crc kubenswrapper[4856]: I1122 08:38:40.979862 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:40 crc kubenswrapper[4856]: I1122 08:38:40.995954 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" Nov 22 08:38:41 crc kubenswrapper[4856]: I1122 08:38:41.072484 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5cff4f4b96-k7sg7"] Nov 22 08:38:41 crc kubenswrapper[4856]: I1122 08:38:41.072906 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-server" containerID="cri-o://44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9" gracePeriod=30 Nov 22 08:38:41 crc kubenswrapper[4856]: I1122 08:38:41.073310 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-httpd" containerID="cri-o://0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c" gracePeriod=30 Nov 22 08:38:41 crc kubenswrapper[4856]: I1122 08:38:41.381734 4856 generic.go:334] "Generic (PLEG): container finished" podID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerID="0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c" exitCode=0 Nov 22 08:38:41 crc kubenswrapper[4856]: I1122 08:38:41.381890 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" event={"ID":"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf","Type":"ContainerDied","Data":"0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c"} Nov 22 08:38:41 crc kubenswrapper[4856]: I1122 08:38:41.932183 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.069163 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-run-httpd\") pod \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.069538 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" (UID: "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.069583 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-combined-ca-bundle\") pod \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.069685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-config-data\") pod \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.069724 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-log-httpd\") pod \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.069831 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-etc-swift\") pod \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.069869 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfc6z\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-kube-api-access-bfc6z\") pod \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\" (UID: \"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf\") " Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.070258 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" (UID: "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.070308 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.075874 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-kube-api-access-bfc6z" (OuterVolumeSpecName: "kube-api-access-bfc6z") pod "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" (UID: "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf"). InnerVolumeSpecName "kube-api-access-bfc6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.077226 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" (UID: "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.115766 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-config-data" (OuterVolumeSpecName: "config-data") pod "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" (UID: "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.118802 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" (UID: "98ee37a3-f468-42c8-bb7a-c44ecae2a0bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.171841 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.171906 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.171927 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.171938 4856 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.171949 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfc6z\" (UniqueName: \"kubernetes.io/projected/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf-kube-api-access-bfc6z\") on node \"crc\" DevicePath \"\"" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.391457 4856 generic.go:334] "Generic (PLEG): container finished" podID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerID="44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9" exitCode=0 Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.391502 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" event={"ID":"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf","Type":"ContainerDied","Data":"44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9"} Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.391552 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" event={"ID":"98ee37a3-f468-42c8-bb7a-c44ecae2a0bf","Type":"ContainerDied","Data":"90591addfaeae8723e80c7a411e45ad849799dad5457dec2ac6f3103c0ec861d"} Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.391573 4856 scope.go:117] "RemoveContainer" containerID="44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.391619 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5cff4f4b96-k7sg7" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.425314 4856 scope.go:117] "RemoveContainer" containerID="0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.428997 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-5cff4f4b96-k7sg7"] Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.436147 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-5cff4f4b96-k7sg7"] Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.443898 4856 scope.go:117] "RemoveContainer" containerID="44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9" Nov 22 08:38:42 crc kubenswrapper[4856]: E1122 08:38:42.444370 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9\": container with ID starting with 44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9 not found: ID does not exist" containerID="44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.444411 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9"} err="failed to get container status \"44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9\": rpc error: code = NotFound desc = could not find container \"44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9\": container with ID starting with 44a021af35eefbd67aa6d2633bc8a086a10505e6ccabcbfa7a395fc38dd18fc9 not found: ID does not exist" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.444432 4856 scope.go:117] "RemoveContainer" containerID="0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c" Nov 22 08:38:42 crc kubenswrapper[4856]: E1122 08:38:42.444779 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c\": container with ID starting with 0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c not found: ID does not exist" containerID="0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.444806 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c"} err="failed to get container status \"0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c\": rpc error: code = NotFound desc = could not find container \"0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c\": container with ID starting with 0e60b0362bf742b1ba192d49f0ea03d5cf92a66022343704af21a1eec256852c not found: ID does not exist" Nov 22 08:38:42 crc kubenswrapper[4856]: I1122 08:38:42.721201 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" path="/var/lib/kubelet/pods/98ee37a3-f468-42c8-bb7a-c44ecae2a0bf/volumes" Nov 22 08:38:59 crc kubenswrapper[4856]: I1122 08:38:59.754267 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:38:59 crc kubenswrapper[4856]: I1122 08:38:59.754843 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:38:59 crc kubenswrapper[4856]: I1122 08:38:59.754899 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:38:59 crc kubenswrapper[4856]: I1122 08:38:59.755690 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d24613435baa98fcf4fed1d58784844b252bad2c404ea5e6d83f40d6769faaee"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:38:59 crc kubenswrapper[4856]: I1122 08:38:59.755751 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://d24613435baa98fcf4fed1d58784844b252bad2c404ea5e6d83f40d6769faaee" gracePeriod=600 Nov 22 08:39:00 crc kubenswrapper[4856]: I1122 08:39:00.587260 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="d24613435baa98fcf4fed1d58784844b252bad2c404ea5e6d83f40d6769faaee" exitCode=0 Nov 22 08:39:00 crc kubenswrapper[4856]: I1122 08:39:00.587359 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"d24613435baa98fcf4fed1d58784844b252bad2c404ea5e6d83f40d6769faaee"} Nov 22 08:39:00 crc kubenswrapper[4856]: I1122 08:39:00.587849 4856 scope.go:117] "RemoveContainer" containerID="79bc1b1c6f9c5137ce19346a865e0b3e4f940299201621d19cb5120a18b2e650" Nov 22 08:39:01 crc kubenswrapper[4856]: I1122 08:39:01.602391 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76"} Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.101129 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-tgjkx"] Nov 22 08:39:13 crc kubenswrapper[4856]: E1122 08:39:13.101944 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" containerName="swift-ring-rebalance" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.101956 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" containerName="swift-ring-rebalance" Nov 22 08:39:13 crc kubenswrapper[4856]: E1122 08:39:13.101982 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0b1d15-fa52-4230-9a43-acea2137779e" containerName="init" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.101988 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0b1d15-fa52-4230-9a43-acea2137779e" 
containerName="init" Nov 22 08:39:13 crc kubenswrapper[4856]: E1122 08:39:13.102001 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec0b1d15-fa52-4230-9a43-acea2137779e" containerName="dnsmasq-dns" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.102008 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec0b1d15-fa52-4230-9a43-acea2137779e" containerName="dnsmasq-dns" Nov 22 08:39:13 crc kubenswrapper[4856]: E1122 08:39:13.102029 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-server" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.102034 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-server" Nov 22 08:39:13 crc kubenswrapper[4856]: E1122 08:39:13.102042 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-httpd" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.102048 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-httpd" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.110343 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec0b1d15-fa52-4230-9a43-acea2137779e" containerName="dnsmasq-dns" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.110435 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-httpd" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.110472 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ee37a3-f468-42c8-bb7a-c44ecae2a0bf" containerName="proxy-server" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.110530 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a" containerName="swift-ring-rebalance" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.111756 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.140084 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tgjkx"] Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.207016 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c90d-account-create-4hzwx"] Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.208208 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.210324 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.217598 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c90d-account-create-4hzwx"] Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.220302 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhbjv\" (UniqueName: \"kubernetes.io/projected/29c8fd83-ca48-40c4-b640-dded6ec91e69-kube-api-access-hhbjv\") pod \"cinder-db-create-tgjkx\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.220402 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c8fd83-ca48-40c4-b640-dded6ec91e69-operator-scripts\") pod \"cinder-db-create-tgjkx\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.322426 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c8fd83-ca48-40c4-b640-dded6ec91e69-operator-scripts\") pod \"cinder-db-create-tgjkx\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.322548 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b9bb96-b75a-450e-afba-b290ec554b4b-operator-scripts\") pod \"cinder-c90d-account-create-4hzwx\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.322657 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfrj\" (UniqueName: \"kubernetes.io/projected/a9b9bb96-b75a-450e-afba-b290ec554b4b-kube-api-access-cjfrj\") pod \"cinder-c90d-account-create-4hzwx\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.322721 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhbjv\" (UniqueName: \"kubernetes.io/projected/29c8fd83-ca48-40c4-b640-dded6ec91e69-kube-api-access-hhbjv\") pod \"cinder-db-create-tgjkx\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.323691 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c8fd83-ca48-40c4-b640-dded6ec91e69-operator-scripts\") pod \"cinder-db-create-tgjkx\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.342274 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhbjv\" (UniqueName: \"kubernetes.io/projected/29c8fd83-ca48-40c4-b640-dded6ec91e69-kube-api-access-hhbjv\") pod \"cinder-db-create-tgjkx\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " 
pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.424090 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b9bb96-b75a-450e-afba-b290ec554b4b-operator-scripts\") pod \"cinder-c90d-account-create-4hzwx\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.424196 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfrj\" (UniqueName: \"kubernetes.io/projected/a9b9bb96-b75a-450e-afba-b290ec554b4b-kube-api-access-cjfrj\") pod \"cinder-c90d-account-create-4hzwx\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.425418 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b9bb96-b75a-450e-afba-b290ec554b4b-operator-scripts\") pod \"cinder-c90d-account-create-4hzwx\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.431940 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.443227 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfrj\" (UniqueName: \"kubernetes.io/projected/a9b9bb96-b75a-450e-afba-b290ec554b4b-kube-api-access-cjfrj\") pod \"cinder-c90d-account-create-4hzwx\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.525985 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.851378 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-tgjkx"] Nov 22 08:39:13 crc kubenswrapper[4856]: W1122 08:39:13.854221 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29c8fd83_ca48_40c4_b640_dded6ec91e69.slice/crio-cce8e5e75252b560f3230bff1f96b1bd017db88938129f3bd1c5d18452f9f991 WatchSource:0}: Error finding container cce8e5e75252b560f3230bff1f96b1bd017db88938129f3bd1c5d18452f9f991: Status 404 returned error can't find the container with id cce8e5e75252b560f3230bff1f96b1bd017db88938129f3bd1c5d18452f9f991 Nov 22 08:39:13 crc kubenswrapper[4856]: I1122 08:39:13.976057 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c90d-account-create-4hzwx"] Nov 22 08:39:14 crc kubenswrapper[4856]: I1122 08:39:14.733117 4856 generic.go:334] "Generic (PLEG): container finished" podID="a9b9bb96-b75a-450e-afba-b290ec554b4b" containerID="f1c8f505db70f4824efdd09dfdc3295943db6c043dd547788670aafe338e3a3e" exitCode=0 Nov 22 08:39:14 crc kubenswrapper[4856]: I1122 08:39:14.733207 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c90d-account-create-4hzwx" event={"ID":"a9b9bb96-b75a-450e-afba-b290ec554b4b","Type":"ContainerDied","Data":"f1c8f505db70f4824efdd09dfdc3295943db6c043dd547788670aafe338e3a3e"} Nov 22 08:39:14 crc kubenswrapper[4856]: I1122 08:39:14.733298 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c90d-account-create-4hzwx" event={"ID":"a9b9bb96-b75a-450e-afba-b290ec554b4b","Type":"ContainerStarted","Data":"3c19fbb0b660d09f436ab636910b5b0492a4c86f6c7001c608851469092a8a5c"} Nov 22 08:39:14 crc kubenswrapper[4856]: I1122 08:39:14.736077 4856 generic.go:334] "Generic (PLEG): container finished" podID="29c8fd83-ca48-40c4-b640-dded6ec91e69" containerID="f0d618af93e239ba26dfe5c8c86a88e8fe73ea7f034082350ee2dd9bc4c81710" exitCode=0 Nov 22 08:39:14 crc kubenswrapper[4856]: I1122 08:39:14.736139 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tgjkx" event={"ID":"29c8fd83-ca48-40c4-b640-dded6ec91e69","Type":"ContainerDied","Data":"f0d618af93e239ba26dfe5c8c86a88e8fe73ea7f034082350ee2dd9bc4c81710"} Nov 22 08:39:14 crc kubenswrapper[4856]: I1122 08:39:14.736172 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tgjkx" event={"ID":"29c8fd83-ca48-40c4-b640-dded6ec91e69","Type":"ContainerStarted","Data":"cce8e5e75252b560f3230bff1f96b1bd017db88938129f3bd1c5d18452f9f991"} Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.123499 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.129305 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.177650 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfrj\" (UniqueName: \"kubernetes.io/projected/a9b9bb96-b75a-450e-afba-b290ec554b4b-kube-api-access-cjfrj\") pod \"a9b9bb96-b75a-450e-afba-b290ec554b4b\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.177762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c8fd83-ca48-40c4-b640-dded6ec91e69-operator-scripts\") pod \"29c8fd83-ca48-40c4-b640-dded6ec91e69\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.177788 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b9bb96-b75a-450e-afba-b290ec554b4b-operator-scripts\") pod \"a9b9bb96-b75a-450e-afba-b290ec554b4b\" (UID: \"a9b9bb96-b75a-450e-afba-b290ec554b4b\") " Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.177984 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhbjv\" (UniqueName: \"kubernetes.io/projected/29c8fd83-ca48-40c4-b640-dded6ec91e69-kube-api-access-hhbjv\") pod \"29c8fd83-ca48-40c4-b640-dded6ec91e69\" (UID: \"29c8fd83-ca48-40c4-b640-dded6ec91e69\") " Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.178470 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29c8fd83-ca48-40c4-b640-dded6ec91e69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29c8fd83-ca48-40c4-b640-dded6ec91e69" (UID: "29c8fd83-ca48-40c4-b640-dded6ec91e69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.178504 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b9bb96-b75a-450e-afba-b290ec554b4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9b9bb96-b75a-450e-afba-b290ec554b4b" (UID: "a9b9bb96-b75a-450e-afba-b290ec554b4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.182956 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b9bb96-b75a-450e-afba-b290ec554b4b-kube-api-access-cjfrj" (OuterVolumeSpecName: "kube-api-access-cjfrj") pod "a9b9bb96-b75a-450e-afba-b290ec554b4b" (UID: "a9b9bb96-b75a-450e-afba-b290ec554b4b"). InnerVolumeSpecName "kube-api-access-cjfrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.183557 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29c8fd83-ca48-40c4-b640-dded6ec91e69-kube-api-access-hhbjv" (OuterVolumeSpecName: "kube-api-access-hhbjv") pod "29c8fd83-ca48-40c4-b640-dded6ec91e69" (UID: "29c8fd83-ca48-40c4-b640-dded6ec91e69"). InnerVolumeSpecName "kube-api-access-hhbjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.279980 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhbjv\" (UniqueName: \"kubernetes.io/projected/29c8fd83-ca48-40c4-b640-dded6ec91e69-kube-api-access-hhbjv\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.280015 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfrj\" (UniqueName: \"kubernetes.io/projected/a9b9bb96-b75a-450e-afba-b290ec554b4b-kube-api-access-cjfrj\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.280026 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29c8fd83-ca48-40c4-b640-dded6ec91e69-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.280038 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b9bb96-b75a-450e-afba-b290ec554b4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.753787 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-tgjkx" event={"ID":"29c8fd83-ca48-40c4-b640-dded6ec91e69","Type":"ContainerDied","Data":"cce8e5e75252b560f3230bff1f96b1bd017db88938129f3bd1c5d18452f9f991"} Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.753842 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cce8e5e75252b560f3230bff1f96b1bd017db88938129f3bd1c5d18452f9f991" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.753800 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-tgjkx" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.756324 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c90d-account-create-4hzwx" event={"ID":"a9b9bb96-b75a-450e-afba-b290ec554b4b","Type":"ContainerDied","Data":"3c19fbb0b660d09f436ab636910b5b0492a4c86f6c7001c608851469092a8a5c"} Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.756357 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c19fbb0b660d09f436ab636910b5b0492a4c86f6c7001c608851469092a8a5c" Nov 22 08:39:16 crc kubenswrapper[4856]: I1122 08:39:16.756395 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c90d-account-create-4hzwx" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.440649 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-x96jr"] Nov 22 08:39:18 crc kubenswrapper[4856]: E1122 08:39:18.442262 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b9bb96-b75a-450e-afba-b290ec554b4b" containerName="mariadb-account-create" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.442334 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b9bb96-b75a-450e-afba-b290ec554b4b" containerName="mariadb-account-create" Nov 22 08:39:18 crc kubenswrapper[4856]: E1122 08:39:18.442449 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29c8fd83-ca48-40c4-b640-dded6ec91e69" containerName="mariadb-database-create" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.442586 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="29c8fd83-ca48-40c4-b640-dded6ec91e69" containerName="mariadb-database-create" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.442845 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="29c8fd83-ca48-40c4-b640-dded6ec91e69" containerName="mariadb-database-create" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.442924 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b9bb96-b75a-450e-afba-b290ec554b4b" containerName="mariadb-account-create" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.443760 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.450577 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.450949 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vs8c2" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.451003 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.461291 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x96jr"] Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.524879 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-db-sync-config-data\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.525441 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-etc-machine-id\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.525555 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-combined-ca-bundle\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.526165 
4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-config-data\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.526287 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m9n2\" (UniqueName: \"kubernetes.io/projected/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-kube-api-access-5m9n2\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.526558 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-scripts\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.627983 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-config-data\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.628060 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m9n2\" (UniqueName: \"kubernetes.io/projected/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-kube-api-access-5m9n2\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.628102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-scripts\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.628154 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-db-sync-config-data\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.628178 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-etc-machine-id\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.628201 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-combined-ca-bundle\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.628792 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-etc-machine-id\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.635600 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-db-sync-config-data\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.635862 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-scripts\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.636238 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-combined-ca-bundle\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.636270 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-config-data\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.650353 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m9n2\" (UniqueName: \"kubernetes.io/projected/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-kube-api-access-5m9n2\") pod \"cinder-db-sync-x96jr\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:18 crc kubenswrapper[4856]: I1122 08:39:18.771240 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:19 crc kubenswrapper[4856]: I1122 08:39:19.211001 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-x96jr"] Nov 22 08:39:19 crc kubenswrapper[4856]: I1122 08:39:19.786268 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x96jr" event={"ID":"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5","Type":"ContainerStarted","Data":"78fe8f45e1d2f36db8ffa23a8d3071b06bc93c8a328d5678b06f73bf27ad3307"} Nov 22 08:39:22 crc kubenswrapper[4856]: I1122 08:39:22.464811 4856 scope.go:117] "RemoveContainer" containerID="ce527cca8d428e0648124727f44117977f3074c8b00841872043c814faf1c91f" Nov 22 08:39:22 crc kubenswrapper[4856]: I1122 08:39:22.493961 4856 scope.go:117] "RemoveContainer" containerID="82dee7ba9b70faf2acdcf9403ef9b6dd28fa4226764a0c7388dfe58fefc7d0ee" Nov 22 08:39:38 crc kubenswrapper[4856]: I1122 08:39:38.962841 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x96jr" event={"ID":"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5","Type":"ContainerStarted","Data":"4b0d32af063078fcdf7da05237996b7339742fb51dea6b8b1b2d3b8d0da0028c"} Nov 22 08:39:38 crc kubenswrapper[4856]: I1122 08:39:38.989981 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-x96jr" podStartSLOduration=2.055003519 podStartE2EDuration="20.989960569s" podCreationTimestamp="2025-11-22 08:39:18 +0000 UTC" firstStartedPulling="2025-11-22 08:39:19.218966215 +0000 UTC m=+5801.632359473" lastFinishedPulling="2025-11-22 08:39:38.153923265 +0000 UTC m=+5820.567316523" observedRunningTime="2025-11-22 08:39:38.982614802 +0000 UTC m=+5821.396008060" watchObservedRunningTime="2025-11-22 08:39:38.989960569 +0000 UTC m=+5821.403353827" Nov 22 08:39:41 crc kubenswrapper[4856]: I1122 08:39:41.992394 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" containerID="4b0d32af063078fcdf7da05237996b7339742fb51dea6b8b1b2d3b8d0da0028c" exitCode=0 Nov 22 08:39:41 crc kubenswrapper[4856]: I1122 08:39:41.992480 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x96jr" event={"ID":"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5","Type":"ContainerDied","Data":"4b0d32af063078fcdf7da05237996b7339742fb51dea6b8b1b2d3b8d0da0028c"} Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.322316 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.402332 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m9n2\" (UniqueName: \"kubernetes.io/projected/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-kube-api-access-5m9n2\") pod \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.402406 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-combined-ca-bundle\") pod \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.402438 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-scripts\") pod \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.402615 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-db-sync-config-data\") pod \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.402649 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-etc-machine-id\") pod \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.402918 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" (UID: "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.403041 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-config-data\") pod \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\" (UID: \"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5\") " Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.403625 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.409827 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-kube-api-access-5m9n2" (OuterVolumeSpecName: "kube-api-access-5m9n2") pod "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" (UID: "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5"). InnerVolumeSpecName "kube-api-access-5m9n2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.409922 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-scripts" (OuterVolumeSpecName: "scripts") pod "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" (UID: "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.410368 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" (UID: "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.433709 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" (UID: "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.456815 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-config-data" (OuterVolumeSpecName: "config-data") pod "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" (UID: "cc7f7ea0-76af-4d66-955a-3ad2b1f034e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.505569 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.505616 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m9n2\" (UniqueName: \"kubernetes.io/projected/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-kube-api-access-5m9n2\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.505627 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.505638 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:43 crc kubenswrapper[4856]: I1122 08:39:43.505649 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.011979 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-x96jr" event={"ID":"cc7f7ea0-76af-4d66-955a-3ad2b1f034e5","Type":"ContainerDied","Data":"78fe8f45e1d2f36db8ffa23a8d3071b06bc93c8a328d5678b06f73bf27ad3307"} Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.012030 4856 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="78fe8f45e1d2f36db8ffa23a8d3071b06bc93c8a328d5678b06f73bf27ad3307" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.012067 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-x96jr" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.356599 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d944f7c5f-lkpm6"] Nov 22 08:39:44 crc kubenswrapper[4856]: E1122 08:39:44.357987 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" containerName="cinder-db-sync" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.358462 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" containerName="cinder-db-sync" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.358853 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" containerName="cinder-db-sync" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.362696 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.372015 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d944f7c5f-lkpm6"] Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.425935 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-dns-svc\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.426198 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nrjc\" (UniqueName: \"kubernetes.io/projected/38c1c139-a689-46ca-84f5-d896cced8655-kube-api-access-6nrjc\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.426424 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-config\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.426496 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-sb\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.426658 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-nb\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.528368 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-6nrjc\" (UniqueName: \"kubernetes.io/projected/38c1c139-a689-46ca-84f5-d896cced8655-kube-api-access-6nrjc\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.528495 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-config\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.528573 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-sb\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.528628 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-nb\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.528688 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-dns-svc\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.529658 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-sb\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.529678 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-nb\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.529699 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-dns-svc\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.530073 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-config\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.548875 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.551442 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.555359 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vs8c2" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.555453 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.555671 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.556351 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.564936 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.574577 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nrjc\" (UniqueName: \"kubernetes.io/projected/38c1c139-a689-46ca-84f5-d896cced8655-kube-api-access-6nrjc\") pod \"dnsmasq-dns-5d944f7c5f-lkpm6\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.630455 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.630545 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.630575 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f750092b-0463-4772-b9b7-fdacec40e6ac-logs\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.630629 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.630819 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f750092b-0463-4772-b9b7-fdacec40e6ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.631011 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-scripts\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.631071 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25fm4\" (UniqueName: \"kubernetes.io/projected/f750092b-0463-4772-b9b7-fdacec40e6ac-kube-api-access-25fm4\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.685064 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.732981 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.733061 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f750092b-0463-4772-b9b7-fdacec40e6ac-logs\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.733112 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.733169 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f750092b-0463-4772-b9b7-fdacec40e6ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.733232 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-scripts\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.733266 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25fm4\" (UniqueName: \"kubernetes.io/projected/f750092b-0463-4772-b9b7-fdacec40e6ac-kube-api-access-25fm4\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.733313 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.733911 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f750092b-0463-4772-b9b7-fdacec40e6ac-etc-machine-id\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.734368 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f750092b-0463-4772-b9b7-fdacec40e6ac-logs\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.742003 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-scripts\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.742699 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.743455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.744964 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data-custom\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.758610 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25fm4\" (UniqueName: \"kubernetes.io/projected/f750092b-0463-4772-b9b7-fdacec40e6ac-kube-api-access-25fm4\") pod \"cinder-api-0\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " pod="openstack/cinder-api-0" Nov 22 08:39:44 crc kubenswrapper[4856]: I1122 08:39:44.908204 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:39:45 crc kubenswrapper[4856]: I1122 08:39:45.166404 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d944f7c5f-lkpm6"] Nov 22 08:39:45 crc kubenswrapper[4856]: I1122 08:39:45.382122 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:45 crc kubenswrapper[4856]: W1122 08:39:45.385071 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf750092b_0463_4772_b9b7_fdacec40e6ac.slice/crio-3c8d87babadebdd54522706a0e34efe04b25ef684279ca835da79c1a84b2d42b WatchSource:0}: Error finding container 3c8d87babadebdd54522706a0e34efe04b25ef684279ca835da79c1a84b2d42b: Status 404 returned error can't find the container with id 3c8d87babadebdd54522706a0e34efe04b25ef684279ca835da79c1a84b2d42b Nov 22 08:39:46 crc kubenswrapper[4856]: I1122 08:39:46.036398 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f750092b-0463-4772-b9b7-fdacec40e6ac","Type":"ContainerStarted","Data":"30a303cf8123a1c2e7216674f3d314384fd0ff728dbc171820f782f5d449875f"} Nov 22 08:39:46 crc kubenswrapper[4856]: I1122 08:39:46.036711 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f750092b-0463-4772-b9b7-fdacec40e6ac","Type":"ContainerStarted","Data":"3c8d87babadebdd54522706a0e34efe04b25ef684279ca835da79c1a84b2d42b"} Nov 22 08:39:46 crc kubenswrapper[4856]: I1122 08:39:46.040418 4856 generic.go:334] "Generic (PLEG): container finished" podID="38c1c139-a689-46ca-84f5-d896cced8655" containerID="dd87d87fc1a05685a1e7ba10e55ba2153690a97ca29021b257cb2a58951cfc25" exitCode=0 Nov 22 08:39:46 crc kubenswrapper[4856]: I1122 08:39:46.040469 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" event={"ID":"38c1c139-a689-46ca-84f5-d896cced8655","Type":"ContainerDied","Data":"dd87d87fc1a05685a1e7ba10e55ba2153690a97ca29021b257cb2a58951cfc25"} Nov 22 08:39:46 crc kubenswrapper[4856]: I1122 08:39:46.040503 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" event={"ID":"38c1c139-a689-46ca-84f5-d896cced8655","Type":"ContainerStarted","Data":"8182fffd124ff448355154d9b99e0ab8b6c2018b6630ad77ae79ebd2cac7d394"} Nov 22 08:39:46 crc kubenswrapper[4856]: I1122 08:39:46.328939 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.053132 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f750092b-0463-4772-b9b7-fdacec40e6ac","Type":"ContainerStarted","Data":"8fd568f1e0cf9006c5fb640eab74d555776670d3cf3ca53482af9648952a0057"} Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.053432 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.053274 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerName="cinder-api" containerID="cri-o://8fd568f1e0cf9006c5fb640eab74d555776670d3cf3ca53482af9648952a0057" gracePeriod=30 Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.053223 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" 
containerName="cinder-api-log" containerID="cri-o://30a303cf8123a1c2e7216674f3d314384fd0ff728dbc171820f782f5d449875f" gracePeriod=30 Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.061632 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" event={"ID":"38c1c139-a689-46ca-84f5-d896cced8655","Type":"ContainerStarted","Data":"db843dd12c0cab2e7deee76fbf9ec6bccc9fdbef7d654090b58686777f362181"} Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.062574 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.082492 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.082474437 podStartE2EDuration="3.082474437s" podCreationTimestamp="2025-11-22 08:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:39:47.076343872 +0000 UTC m=+5829.489737140" watchObservedRunningTime="2025-11-22 08:39:47.082474437 +0000 UTC m=+5829.495867695" Nov 22 08:39:47 crc kubenswrapper[4856]: I1122 08:39:47.114016 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" podStartSLOduration=3.113989835 podStartE2EDuration="3.113989835s" podCreationTimestamp="2025-11-22 08:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:39:47.097717068 +0000 UTC m=+5829.511110336" watchObservedRunningTime="2025-11-22 08:39:47.113989835 +0000 UTC m=+5829.527383093" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.073839 4856 generic.go:334] "Generic (PLEG): container finished" podID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerID="8fd568f1e0cf9006c5fb640eab74d555776670d3cf3ca53482af9648952a0057" exitCode=0 Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.074108 4856 generic.go:334] "Generic (PLEG): container finished" podID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerID="30a303cf8123a1c2e7216674f3d314384fd0ff728dbc171820f782f5d449875f" exitCode=143 Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.074008 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f750092b-0463-4772-b9b7-fdacec40e6ac","Type":"ContainerDied","Data":"8fd568f1e0cf9006c5fb640eab74d555776670d3cf3ca53482af9648952a0057"} Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.075349 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f750092b-0463-4772-b9b7-fdacec40e6ac","Type":"ContainerDied","Data":"30a303cf8123a1c2e7216674f3d314384fd0ff728dbc171820f782f5d449875f"} Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.075372 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"f750092b-0463-4772-b9b7-fdacec40e6ac","Type":"ContainerDied","Data":"3c8d87babadebdd54522706a0e34efe04b25ef684279ca835da79c1a84b2d42b"} Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.075388 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c8d87babadebdd54522706a0e34efe04b25ef684279ca835da79c1a84b2d42b" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.144934 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.200206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data-custom\") pod \"f750092b-0463-4772-b9b7-fdacec40e6ac\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.200250 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-scripts\") pod \"f750092b-0463-4772-b9b7-fdacec40e6ac\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.200333 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data\") pod \"f750092b-0463-4772-b9b7-fdacec40e6ac\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.200380 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f750092b-0463-4772-b9b7-fdacec40e6ac-etc-machine-id\") pod \"f750092b-0463-4772-b9b7-fdacec40e6ac\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.200417 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25fm4\" (UniqueName: \"kubernetes.io/projected/f750092b-0463-4772-b9b7-fdacec40e6ac-kube-api-access-25fm4\") pod \"f750092b-0463-4772-b9b7-fdacec40e6ac\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.200485 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f750092b-0463-4772-b9b7-fdacec40e6ac-logs\") pod \"f750092b-0463-4772-b9b7-fdacec40e6ac\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.200521 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-combined-ca-bundle\") pod \"f750092b-0463-4772-b9b7-fdacec40e6ac\" (UID: \"f750092b-0463-4772-b9b7-fdacec40e6ac\") " Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.201917 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f750092b-0463-4772-b9b7-fdacec40e6ac-logs" (OuterVolumeSpecName: "logs") pod "f750092b-0463-4772-b9b7-fdacec40e6ac" (UID: "f750092b-0463-4772-b9b7-fdacec40e6ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.201978 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f750092b-0463-4772-b9b7-fdacec40e6ac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f750092b-0463-4772-b9b7-fdacec40e6ac" (UID: "f750092b-0463-4772-b9b7-fdacec40e6ac"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.207048 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-scripts" (OuterVolumeSpecName: "scripts") pod "f750092b-0463-4772-b9b7-fdacec40e6ac" (UID: "f750092b-0463-4772-b9b7-fdacec40e6ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.207462 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f750092b-0463-4772-b9b7-fdacec40e6ac-kube-api-access-25fm4" (OuterVolumeSpecName: "kube-api-access-25fm4") pod "f750092b-0463-4772-b9b7-fdacec40e6ac" (UID: "f750092b-0463-4772-b9b7-fdacec40e6ac"). InnerVolumeSpecName "kube-api-access-25fm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.207658 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f750092b-0463-4772-b9b7-fdacec40e6ac" (UID: "f750092b-0463-4772-b9b7-fdacec40e6ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.227699 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f750092b-0463-4772-b9b7-fdacec40e6ac" (UID: "f750092b-0463-4772-b9b7-fdacec40e6ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.252722 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data" (OuterVolumeSpecName: "config-data") pod "f750092b-0463-4772-b9b7-fdacec40e6ac" (UID: "f750092b-0463-4772-b9b7-fdacec40e6ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.303064 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.303110 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f750092b-0463-4772-b9b7-fdacec40e6ac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.303127 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25fm4\" (UniqueName: \"kubernetes.io/projected/f750092b-0463-4772-b9b7-fdacec40e6ac-kube-api-access-25fm4\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.303138 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f750092b-0463-4772-b9b7-fdacec40e6ac-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.303148 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.303159 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:48 crc kubenswrapper[4856]: I1122 08:39:48.303171 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f750092b-0463-4772-b9b7-fdacec40e6ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.081956 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.109644 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.117589 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.134734 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:49 crc kubenswrapper[4856]: E1122 08:39:49.135145 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerName="cinder-api" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.135165 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerName="cinder-api" Nov 22 08:39:49 crc kubenswrapper[4856]: E1122 08:39:49.135181 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerName="cinder-api-log" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.135186 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerName="cinder-api-log" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.135371 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerName="cinder-api" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.135402 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" containerName="cinder-api-log" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.136314 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.139295 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-vs8c2" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.139353 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.139295 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.139771 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.139870 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.140102 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.147667 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.218743 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.218912 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb17631b-7d2a-4457-8d20-01bcc390c220-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.219577 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.219633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.219858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-scripts\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.219914 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-public-tls-certs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.219982 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.220115 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb17631b-7d2a-4457-8d20-01bcc390c220-logs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.220257 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpng\" (UniqueName: \"kubernetes.io/projected/eb17631b-7d2a-4457-8d20-01bcc390c220-kube-api-access-xwpng\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.322363 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb17631b-7d2a-4457-8d20-01bcc390c220-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.322964 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.322994 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.322489 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb17631b-7d2a-4457-8d20-01bcc390c220-etc-machine-id\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.323055 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-scripts\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.323149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-public-tls-certs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.323214 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: 
I1122 08:39:49.323283 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb17631b-7d2a-4457-8d20-01bcc390c220-logs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.323386 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwpng\" (UniqueName: \"kubernetes.io/projected/eb17631b-7d2a-4457-8d20-01bcc390c220-kube-api-access-xwpng\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.323572 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.324120 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb17631b-7d2a-4457-8d20-01bcc390c220-logs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.330060 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.330234 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.330691 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data-custom\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.330716 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-scripts\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.330884 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.332287 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-public-tls-certs\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.341199 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xwpng\" (UniqueName: \"kubernetes.io/projected/eb17631b-7d2a-4457-8d20-01bcc390c220-kube-api-access-xwpng\") pod \"cinder-api-0\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.471011 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:39:49 crc kubenswrapper[4856]: I1122 08:39:49.920215 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:39:49 crc kubenswrapper[4856]: W1122 08:39:49.929343 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb17631b_7d2a_4457_8d20_01bcc390c220.slice/crio-0cda85b0c8e3575da9638f0046f0465a2340226cfbd573810a6cc69e3b250f80 WatchSource:0}: Error finding container 0cda85b0c8e3575da9638f0046f0465a2340226cfbd573810a6cc69e3b250f80: Status 404 returned error can't find the container with id 0cda85b0c8e3575da9638f0046f0465a2340226cfbd573810a6cc69e3b250f80 Nov 22 08:39:50 crc kubenswrapper[4856]: I1122 08:39:50.093885 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb17631b-7d2a-4457-8d20-01bcc390c220","Type":"ContainerStarted","Data":"0cda85b0c8e3575da9638f0046f0465a2340226cfbd573810a6cc69e3b250f80"} Nov 22 08:39:50 crc kubenswrapper[4856]: I1122 08:39:50.724480 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f750092b-0463-4772-b9b7-fdacec40e6ac" path="/var/lib/kubelet/pods/f750092b-0463-4772-b9b7-fdacec40e6ac/volumes" Nov 22 08:39:52 crc kubenswrapper[4856]: I1122 08:39:52.112900 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb17631b-7d2a-4457-8d20-01bcc390c220","Type":"ContainerStarted","Data":"df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d"} Nov 22 08:39:53 crc kubenswrapper[4856]: I1122 08:39:53.123679 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb17631b-7d2a-4457-8d20-01bcc390c220","Type":"ContainerStarted","Data":"9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3"} Nov 22 08:39:53 crc kubenswrapper[4856]: I1122 08:39:53.124007 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 08:39:53 crc kubenswrapper[4856]: I1122 08:39:53.156133 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.15610288 podStartE2EDuration="4.15610288s" podCreationTimestamp="2025-11-22 08:39:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:39:53.14831875 +0000 UTC m=+5835.561712038" watchObservedRunningTime="2025-11-22 08:39:53.15610288 +0000 UTC m=+5835.569496158" Nov 22 08:39:54 crc kubenswrapper[4856]: I1122 08:39:54.686807 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:39:54 crc kubenswrapper[4856]: I1122 08:39:54.754004 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557f8c765f-svdht"] Nov 22 08:39:54 crc kubenswrapper[4856]: I1122 08:39:54.754551 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-557f8c765f-svdht" 
podUID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerName="dnsmasq-dns" containerID="cri-o://1c2c44d843e73c4b6787792bfe4b60e295fc4ab12bc5ba33086158551c0869fb" gracePeriod=10 Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.143486 4856 generic.go:334] "Generic (PLEG): container finished" podID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerID="1c2c44d843e73c4b6787792bfe4b60e295fc4ab12bc5ba33086158551c0869fb" exitCode=0 Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.143556 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f8c765f-svdht" event={"ID":"42d62543-107b-4d42-a45b-aa1f49b3323c","Type":"ContainerDied","Data":"1c2c44d843e73c4b6787792bfe4b60e295fc4ab12bc5ba33086158551c0869fb"} Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.785645 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.850111 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-config\") pod \"42d62543-107b-4d42-a45b-aa1f49b3323c\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.850210 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-nb\") pod \"42d62543-107b-4d42-a45b-aa1f49b3323c\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.850238 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-sb\") pod \"42d62543-107b-4d42-a45b-aa1f49b3323c\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.850290 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z299k\" (UniqueName: \"kubernetes.io/projected/42d62543-107b-4d42-a45b-aa1f49b3323c-kube-api-access-z299k\") pod \"42d62543-107b-4d42-a45b-aa1f49b3323c\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.850316 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-dns-svc\") pod \"42d62543-107b-4d42-a45b-aa1f49b3323c\" (UID: \"42d62543-107b-4d42-a45b-aa1f49b3323c\") " Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.860743 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d62543-107b-4d42-a45b-aa1f49b3323c-kube-api-access-z299k" (OuterVolumeSpecName: "kube-api-access-z299k") pod "42d62543-107b-4d42-a45b-aa1f49b3323c" (UID: "42d62543-107b-4d42-a45b-aa1f49b3323c"). InnerVolumeSpecName "kube-api-access-z299k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.894068 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42d62543-107b-4d42-a45b-aa1f49b3323c" (UID: "42d62543-107b-4d42-a45b-aa1f49b3323c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.896245 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "42d62543-107b-4d42-a45b-aa1f49b3323c" (UID: "42d62543-107b-4d42-a45b-aa1f49b3323c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.897437 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-config" (OuterVolumeSpecName: "config") pod "42d62543-107b-4d42-a45b-aa1f49b3323c" (UID: "42d62543-107b-4d42-a45b-aa1f49b3323c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.899407 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42d62543-107b-4d42-a45b-aa1f49b3323c" (UID: "42d62543-107b-4d42-a45b-aa1f49b3323c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.952965 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.953000 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.953016 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.953028 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z299k\" (UniqueName: \"kubernetes.io/projected/42d62543-107b-4d42-a45b-aa1f49b3323c-kube-api-access-z299k\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:55 crc kubenswrapper[4856]: I1122 08:39:55.953040 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42d62543-107b-4d42-a45b-aa1f49b3323c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:39:56 crc kubenswrapper[4856]: I1122 08:39:56.163283 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-557f8c765f-svdht" event={"ID":"42d62543-107b-4d42-a45b-aa1f49b3323c","Type":"ContainerDied","Data":"9083cb76f203c50a10a17b85ab1e6f6a9f87eafb1b2747001cdb5fb690b80528"} Nov 22 08:39:56 crc kubenswrapper[4856]: I1122 08:39:56.163763 4856 scope.go:117] "RemoveContainer" containerID="1c2c44d843e73c4b6787792bfe4b60e295fc4ab12bc5ba33086158551c0869fb" Nov 22 08:39:56 crc kubenswrapper[4856]: I1122 08:39:56.163798 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-557f8c765f-svdht" Nov 22 08:39:56 crc kubenswrapper[4856]: I1122 08:39:56.192295 4856 scope.go:117] "RemoveContainer" containerID="68cccd80e4a60ecdccd11dfcdaedf805c3ddc70486ad2da449bff5e73eb86c76" Nov 22 08:39:56 crc kubenswrapper[4856]: I1122 08:39:56.195522 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-557f8c765f-svdht"] Nov 22 08:39:56 crc kubenswrapper[4856]: I1122 08:39:56.203317 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-557f8c765f-svdht"] Nov 22 08:39:56 crc kubenswrapper[4856]: I1122 08:39:56.720074 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d62543-107b-4d42-a45b-aa1f49b3323c" path="/var/lib/kubelet/pods/42d62543-107b-4d42-a45b-aa1f49b3323c/volumes" Nov 22 08:40:01 crc kubenswrapper[4856]: I1122 08:40:01.262748 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.812186 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:19 crc kubenswrapper[4856]: E1122 08:40:19.813007 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerName="init" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.813023 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerName="init" Nov 22 08:40:19 crc kubenswrapper[4856]: E1122 08:40:19.813062 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerName="dnsmasq-dns" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.813069 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerName="dnsmasq-dns" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.813268 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d62543-107b-4d42-a45b-aa1f49b3323c" containerName="dnsmasq-dns" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.816486 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.823315 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.830006 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.950781 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6879357e-1e06-433d-b950-80faf5ecc92b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.951086 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.951218 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-scripts\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.951398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.951637 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxpk\" (UniqueName: \"kubernetes.io/projected/6879357e-1e06-433d-b950-80faf5ecc92b-kube-api-access-zpxpk\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:19 crc kubenswrapper[4856]: I1122 08:40:19.951702 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.053325 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.053400 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-scripts\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.053456 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.053481 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxpk\" (UniqueName: \"kubernetes.io/projected/6879357e-1e06-433d-b950-80faf5ecc92b-kube-api-access-zpxpk\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.053498 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.053537 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6879357e-1e06-433d-b950-80faf5ecc92b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.053681 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6879357e-1e06-433d-b950-80faf5ecc92b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.061827 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.061896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.062015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.062297 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-scripts\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.079760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpxpk\" (UniqueName: \"kubernetes.io/projected/6879357e-1e06-433d-b950-80faf5ecc92b-kube-api-access-zpxpk\") pod \"cinder-scheduler-0\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " 
pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.158711 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 08:40:20 crc kubenswrapper[4856]: I1122 08:40:20.603572 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:21 crc kubenswrapper[4856]: I1122 08:40:21.410855 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:40:21 crc kubenswrapper[4856]: I1122 08:40:21.411242 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api-log" containerID="cri-o://df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d" gracePeriod=30 Nov 22 08:40:21 crc kubenswrapper[4856]: I1122 08:40:21.411354 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api" containerID="cri-o://9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3" gracePeriod=30 Nov 22 08:40:21 crc kubenswrapper[4856]: I1122 08:40:21.440709 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6879357e-1e06-433d-b950-80faf5ecc92b","Type":"ContainerStarted","Data":"626d65a2a9ba181ebfcf63623ed7a77fc4bc0117ef23dc0d4c18055d95acb823"} Nov 22 08:40:22 crc kubenswrapper[4856]: I1122 08:40:22.454067 4856 generic.go:334] "Generic (PLEG): container finished" podID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerID="df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d" exitCode=143 Nov 22 08:40:22 crc kubenswrapper[4856]: I1122 08:40:22.454189 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb17631b-7d2a-4457-8d20-01bcc390c220","Type":"ContainerDied","Data":"df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d"} Nov 22 08:40:22 crc kubenswrapper[4856]: I1122 08:40:22.624121 4856 scope.go:117] "RemoveContainer" containerID="0388f4b5015abc8b5326411d4bbbe616481d9fd72b87afdbcc731d70d01e4921" Nov 22 08:40:23 crc kubenswrapper[4856]: I1122 08:40:23.468027 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6879357e-1e06-433d-b950-80faf5ecc92b","Type":"ContainerStarted","Data":"aa5a36beba87e57497fab657a27c6455112772d96d817424d2732f8efecea666"} Nov 22 08:40:24 crc kubenswrapper[4856]: I1122 08:40:24.481648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6879357e-1e06-433d-b950-80faf5ecc92b","Type":"ContainerStarted","Data":"50306937493b3b8a68dfbd21a73719e49539a624e2187b36e67d2454fa55b6a4"} Nov 22 08:40:24 crc kubenswrapper[4856]: I1122 08:40:24.518058 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.52847024 podStartE2EDuration="5.518023297s" podCreationTimestamp="2025-11-22 08:40:19 +0000 UTC" firstStartedPulling="2025-11-22 08:40:20.624608758 +0000 UTC m=+5863.038002056" lastFinishedPulling="2025-11-22 08:40:22.614161855 +0000 UTC m=+5865.027555113" observedRunningTime="2025-11-22 08:40:24.507321469 +0000 UTC m=+5866.920714747" watchObservedRunningTime="2025-11-22 08:40:24.518023297 +0000 UTC m=+5866.931416555" Nov 22 08:40:24 crc kubenswrapper[4856]: I1122 08:40:24.576616 4856 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.56:8776/healthcheck\": read tcp 10.217.0.2:44520->10.217.1.56:8776: read: connection reset by peer" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.154991 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.159177 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264398 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-internal-tls-certs\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264479 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-combined-ca-bundle\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264588 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-scripts\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264659 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwpng\" (UniqueName: \"kubernetes.io/projected/eb17631b-7d2a-4457-8d20-01bcc390c220-kube-api-access-xwpng\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264680 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb17631b-7d2a-4457-8d20-01bcc390c220-etc-machine-id\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264730 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-public-tls-certs\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264769 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data-custom\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264816 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.264860 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb17631b-7d2a-4457-8d20-01bcc390c220-logs\") pod \"eb17631b-7d2a-4457-8d20-01bcc390c220\" (UID: \"eb17631b-7d2a-4457-8d20-01bcc390c220\") " Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.268786 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eb17631b-7d2a-4457-8d20-01bcc390c220-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.269602 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eb17631b-7d2a-4457-8d20-01bcc390c220-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.271164 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb17631b-7d2a-4457-8d20-01bcc390c220-logs" (OuterVolumeSpecName: "logs") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.272192 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-scripts" (OuterVolumeSpecName: "scripts") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.279794 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.296632 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb17631b-7d2a-4457-8d20-01bcc390c220-kube-api-access-xwpng" (OuterVolumeSpecName: "kube-api-access-xwpng") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "kube-api-access-xwpng". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.305813 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.347690 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data" (OuterVolumeSpecName: "config-data") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.369266 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.372348 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.372377 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.372392 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwpng\" (UniqueName: \"kubernetes.io/projected/eb17631b-7d2a-4457-8d20-01bcc390c220-kube-api-access-xwpng\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.372409 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.372420 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.372431 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.372443 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb17631b-7d2a-4457-8d20-01bcc390c220-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.377293 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb17631b-7d2a-4457-8d20-01bcc390c220" (UID: "eb17631b-7d2a-4457-8d20-01bcc390c220"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.479249 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb17631b-7d2a-4457-8d20-01bcc390c220-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.500268 4856 generic.go:334] "Generic (PLEG): container finished" podID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerID="9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3" exitCode=0 Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.500393 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb17631b-7d2a-4457-8d20-01bcc390c220","Type":"ContainerDied","Data":"9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3"} Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.500453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"eb17631b-7d2a-4457-8d20-01bcc390c220","Type":"ContainerDied","Data":"0cda85b0c8e3575da9638f0046f0465a2340226cfbd573810a6cc69e3b250f80"} Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.500477 4856 scope.go:117] "RemoveContainer" containerID="9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.500398 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.527303 4856 scope.go:117] "RemoveContainer" containerID="df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.559930 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.568599 4856 scope.go:117] "RemoveContainer" containerID="9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3" Nov 22 08:40:25 crc kubenswrapper[4856]: E1122 08:40:25.569313 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3\": container with ID starting with 9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3 not found: ID does not exist" containerID="9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.569353 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3"} err="failed to get container status \"9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3\": rpc error: code = NotFound desc = could not find container \"9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3\": container with ID starting with 9d5267047f0534321a109a26c325d522e9f3315a22920ce8e788c5ee906647f3 not found: ID does not exist" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.569387 4856 scope.go:117] "RemoveContainer" containerID="df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d" Nov 22 08:40:25 crc kubenswrapper[4856]: E1122 08:40:25.569639 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d\": container with ID 
starting with df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d not found: ID does not exist" containerID="df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.569669 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d"} err="failed to get container status \"df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d\": rpc error: code = NotFound desc = could not find container \"df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d\": container with ID starting with df3218513f48c5e58943da42003684d7ef01f7cc4adb645ba3e8e9695c10d96d not found: ID does not exist" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.572835 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.582202 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:40:25 crc kubenswrapper[4856]: E1122 08:40:25.582877 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.582910 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api" Nov 22 08:40:25 crc kubenswrapper[4856]: E1122 08:40:25.582930 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api-log" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.582940 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api-log" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.583428 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api-log" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.583461 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" containerName="cinder-api" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.585191 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.591598 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.591827 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.591891 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.606305 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.682755 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-config-data\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.682822 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.682856 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-scripts\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.682921 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.682985 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.683143 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-logs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.683537 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.683603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5j9d\" (UniqueName: 
\"kubernetes.io/projected/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-kube-api-access-d5j9d\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.684066 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786472 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786587 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-scripts\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786634 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786700 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-logs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786765 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786802 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5j9d\" (UniqueName: \"kubernetes.io/projected/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-kube-api-access-d5j9d\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.786979 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-config-data\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.787266 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-etc-machine-id\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.787878 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-logs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.792824 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.792941 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.793284 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-scripts\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.793777 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-public-tls-certs\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.793805 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-config-data\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.794765 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-config-data-custom\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.808133 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5j9d\" (UniqueName: \"kubernetes.io/projected/5ba26c8a-e031-4fa4-85e3-e13e63ef1448-kube-api-access-d5j9d\") pod \"cinder-api-0\" (UID: \"5ba26c8a-e031-4fa4-85e3-e13e63ef1448\") " pod="openstack/cinder-api-0" Nov 22 08:40:25 crc kubenswrapper[4856]: I1122 08:40:25.905964 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 22 08:40:26 crc kubenswrapper[4856]: I1122 08:40:26.355923 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 22 08:40:26 crc kubenswrapper[4856]: I1122 08:40:26.514598 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ba26c8a-e031-4fa4-85e3-e13e63ef1448","Type":"ContainerStarted","Data":"68b4114e41b5b61694726285646e46d35e28abb3849ec94db06f5108cb10918f"} Nov 22 08:40:26 crc kubenswrapper[4856]: I1122 08:40:26.722429 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb17631b-7d2a-4457-8d20-01bcc390c220" path="/var/lib/kubelet/pods/eb17631b-7d2a-4457-8d20-01bcc390c220/volumes" Nov 22 08:40:27 crc kubenswrapper[4856]: I1122 08:40:27.529140 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ba26c8a-e031-4fa4-85e3-e13e63ef1448","Type":"ContainerStarted","Data":"9acd68de1880a9c349b4b3baa4d4f75ba305312c579c68ff1cce1e66840f0da8"} Nov 22 08:40:28 crc kubenswrapper[4856]: I1122 08:40:28.550764 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"5ba26c8a-e031-4fa4-85e3-e13e63ef1448","Type":"ContainerStarted","Data":"0362c847dee82df3bb266868e3a0b9e34eb49d7e6d5e7aae113257a21f249deb"} Nov 22 08:40:28 crc kubenswrapper[4856]: I1122 08:40:28.551104 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 22 08:40:28 crc kubenswrapper[4856]: I1122 08:40:28.592393 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.592367341 podStartE2EDuration="3.592367341s" podCreationTimestamp="2025-11-22 08:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:40:28.575920838 +0000 UTC m=+5870.989314126" watchObservedRunningTime="2025-11-22 08:40:28.592367341 +0000 UTC m=+5871.005760599" Nov 22 08:40:30 crc kubenswrapper[4856]: I1122 08:40:30.382532 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 08:40:30 crc kubenswrapper[4856]: I1122 08:40:30.438292 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:30 crc kubenswrapper[4856]: I1122 08:40:30.570684 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="cinder-scheduler" containerID="cri-o://aa5a36beba87e57497fab657a27c6455112772d96d817424d2732f8efecea666" gracePeriod=30 Nov 22 08:40:30 crc kubenswrapper[4856]: I1122 08:40:30.570755 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="probe" containerID="cri-o://50306937493b3b8a68dfbd21a73719e49539a624e2187b36e67d2454fa55b6a4" gracePeriod=30 Nov 22 08:40:32 crc kubenswrapper[4856]: I1122 08:40:32.594077 4856 generic.go:334] "Generic (PLEG): container finished" podID="6879357e-1e06-433d-b950-80faf5ecc92b" containerID="50306937493b3b8a68dfbd21a73719e49539a624e2187b36e67d2454fa55b6a4" exitCode=0 Nov 22 08:40:32 crc kubenswrapper[4856]: I1122 08:40:32.594148 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"6879357e-1e06-433d-b950-80faf5ecc92b","Type":"ContainerDied","Data":"50306937493b3b8a68dfbd21a73719e49539a624e2187b36e67d2454fa55b6a4"} Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.618539 4856 generic.go:334] "Generic (PLEG): container finished" podID="6879357e-1e06-433d-b950-80faf5ecc92b" containerID="aa5a36beba87e57497fab657a27c6455112772d96d817424d2732f8efecea666" exitCode=0 Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.618540 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6879357e-1e06-433d-b950-80faf5ecc92b","Type":"ContainerDied","Data":"aa5a36beba87e57497fab657a27c6455112772d96d817424d2732f8efecea666"} Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.751844 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.874901 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data-custom\") pod \"6879357e-1e06-433d-b950-80faf5ecc92b\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.875212 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-combined-ca-bundle\") pod \"6879357e-1e06-433d-b950-80faf5ecc92b\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.875332 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data\") pod \"6879357e-1e06-433d-b950-80faf5ecc92b\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.875441 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-scripts\") pod \"6879357e-1e06-433d-b950-80faf5ecc92b\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.875536 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6879357e-1e06-433d-b950-80faf5ecc92b-etc-machine-id\") pod \"6879357e-1e06-433d-b950-80faf5ecc92b\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.875641 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpxpk\" (UniqueName: \"kubernetes.io/projected/6879357e-1e06-433d-b950-80faf5ecc92b-kube-api-access-zpxpk\") pod \"6879357e-1e06-433d-b950-80faf5ecc92b\" (UID: \"6879357e-1e06-433d-b950-80faf5ecc92b\") " Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.875620 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6879357e-1e06-433d-b950-80faf5ecc92b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6879357e-1e06-433d-b950-80faf5ecc92b" (UID: "6879357e-1e06-433d-b950-80faf5ecc92b"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.876214 4856 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6879357e-1e06-433d-b950-80faf5ecc92b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.882219 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6879357e-1e06-433d-b950-80faf5ecc92b-kube-api-access-zpxpk" (OuterVolumeSpecName: "kube-api-access-zpxpk") pod "6879357e-1e06-433d-b950-80faf5ecc92b" (UID: "6879357e-1e06-433d-b950-80faf5ecc92b"). InnerVolumeSpecName "kube-api-access-zpxpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.882398 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-scripts" (OuterVolumeSpecName: "scripts") pod "6879357e-1e06-433d-b950-80faf5ecc92b" (UID: "6879357e-1e06-433d-b950-80faf5ecc92b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.884648 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6879357e-1e06-433d-b950-80faf5ecc92b" (UID: "6879357e-1e06-433d-b950-80faf5ecc92b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.942592 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6879357e-1e06-433d-b950-80faf5ecc92b" (UID: "6879357e-1e06-433d-b950-80faf5ecc92b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.977703 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.977734 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.977744 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.977758 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpxpk\" (UniqueName: \"kubernetes.io/projected/6879357e-1e06-433d-b950-80faf5ecc92b-kube-api-access-zpxpk\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:34 crc kubenswrapper[4856]: I1122 08:40:34.993182 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data" (OuterVolumeSpecName: "config-data") pod "6879357e-1e06-433d-b950-80faf5ecc92b" (UID: "6879357e-1e06-433d-b950-80faf5ecc92b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.079766 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6879357e-1e06-433d-b950-80faf5ecc92b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.630357 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6879357e-1e06-433d-b950-80faf5ecc92b","Type":"ContainerDied","Data":"626d65a2a9ba181ebfcf63623ed7a77fc4bc0117ef23dc0d4c18055d95acb823"} Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.630411 4856 scope.go:117] "RemoveContainer" containerID="50306937493b3b8a68dfbd21a73719e49539a624e2187b36e67d2454fa55b6a4" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.630426 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.652574 4856 scope.go:117] "RemoveContainer" containerID="aa5a36beba87e57497fab657a27c6455112772d96d817424d2732f8efecea666" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.674075 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.685256 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.704238 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:35 crc kubenswrapper[4856]: E1122 08:40:35.704758 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="probe" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.704774 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="probe" Nov 22 08:40:35 crc kubenswrapper[4856]: E1122 08:40:35.704802 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="cinder-scheduler" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.704811 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="cinder-scheduler" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.705051 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="probe" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.705064 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" containerName="cinder-scheduler" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.706294 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.708514 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.718799 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.795437 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.795865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-scripts\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.795903 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvxwx\" (UniqueName: \"kubernetes.io/projected/56903f1f-89ce-4eca-bd84-0cd0e3814079-kube-api-access-mvxwx\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.795958 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.795985 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-config-data\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.796212 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56903f1f-89ce-4eca-bd84-0cd0e3814079-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.898946 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56903f1f-89ce-4eca-bd84-0cd0e3814079-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.899023 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.899055 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/56903f1f-89ce-4eca-bd84-0cd0e3814079-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.899065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-scripts\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.899163 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvxwx\" (UniqueName: \"kubernetes.io/projected/56903f1f-89ce-4eca-bd84-0cd0e3814079-kube-api-access-mvxwx\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.899211 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.899231 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-config-data\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.905769 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-scripts\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.906047 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.907096 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.919409 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvxwx\" (UniqueName: \"kubernetes.io/projected/56903f1f-89ce-4eca-bd84-0cd0e3814079-kube-api-access-mvxwx\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:35 crc kubenswrapper[4856]: I1122 08:40:35.928957 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56903f1f-89ce-4eca-bd84-0cd0e3814079-config-data\") pod \"cinder-scheduler-0\" (UID: \"56903f1f-89ce-4eca-bd84-0cd0e3814079\") " pod="openstack/cinder-scheduler-0" Nov 22 08:40:36 
crc kubenswrapper[4856]: I1122 08:40:36.024057 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 22 08:40:36 crc kubenswrapper[4856]: I1122 08:40:36.478981 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 22 08:40:36 crc kubenswrapper[4856]: W1122 08:40:36.483194 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56903f1f_89ce_4eca_bd84_0cd0e3814079.slice/crio-ae9f350f84b7a76ec2c916bb4b11b44de3ea490c6089116adb253e8a839be913 WatchSource:0}: Error finding container ae9f350f84b7a76ec2c916bb4b11b44de3ea490c6089116adb253e8a839be913: Status 404 returned error can't find the container with id ae9f350f84b7a76ec2c916bb4b11b44de3ea490c6089116adb253e8a839be913 Nov 22 08:40:36 crc kubenswrapper[4856]: I1122 08:40:36.644891 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56903f1f-89ce-4eca-bd84-0cd0e3814079","Type":"ContainerStarted","Data":"ae9f350f84b7a76ec2c916bb4b11b44de3ea490c6089116adb253e8a839be913"} Nov 22 08:40:36 crc kubenswrapper[4856]: I1122 08:40:36.722627 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6879357e-1e06-433d-b950-80faf5ecc92b" path="/var/lib/kubelet/pods/6879357e-1e06-433d-b950-80faf5ecc92b/volumes" Nov 22 08:40:37 crc kubenswrapper[4856]: I1122 08:40:37.657911 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56903f1f-89ce-4eca-bd84-0cd0e3814079","Type":"ContainerStarted","Data":"e454761a35f0893a064840760dfa40c4f102f8d4686df1e01b0cb6fd9e28e030"} Nov 22 08:40:37 crc kubenswrapper[4856]: I1122 08:40:37.798746 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 22 08:40:38 crc kubenswrapper[4856]: I1122 08:40:38.670940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"56903f1f-89ce-4eca-bd84-0cd0e3814079","Type":"ContainerStarted","Data":"101335993c9868fb0a15a477ebfaa1f301cdc77d636ac41c5e966c7b34516aae"} Nov 22 08:40:38 crc kubenswrapper[4856]: I1122 08:40:38.693877 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.693857416 podStartE2EDuration="3.693857416s" podCreationTimestamp="2025-11-22 08:40:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:40:38.691795591 +0000 UTC m=+5881.105188859" watchObservedRunningTime="2025-11-22 08:40:38.693857416 +0000 UTC m=+5881.107250674" Nov 22 08:40:41 crc kubenswrapper[4856]: I1122 08:40:41.025005 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 22 08:40:46 crc kubenswrapper[4856]: I1122 08:40:46.222542 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.619600 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wz8t9"] Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.621391 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.631321 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wz8t9"] Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.719067 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-c708-account-create-5pnzj"] Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.720226 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.723732 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.727979 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c708-account-create-5pnzj"] Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.733951 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec3298d-6113-4ba7-84d9-61a961e8128d-operator-scripts\") pod \"glance-db-create-wz8t9\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.734084 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqfc\" (UniqueName: \"kubernetes.io/projected/eec3298d-6113-4ba7-84d9-61a961e8128d-kube-api-access-xfqfc\") pod \"glance-db-create-wz8t9\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.835495 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec3298d-6113-4ba7-84d9-61a961e8128d-operator-scripts\") pod \"glance-db-create-wz8t9\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.835585 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thjh\" (UniqueName: \"kubernetes.io/projected/70b4c046-0b3b-42ea-b75d-ee15442bc981-kube-api-access-2thjh\") pod \"glance-c708-account-create-5pnzj\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.835658 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b4c046-0b3b-42ea-b75d-ee15442bc981-operator-scripts\") pod \"glance-c708-account-create-5pnzj\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.835735 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfqfc\" (UniqueName: \"kubernetes.io/projected/eec3298d-6113-4ba7-84d9-61a961e8128d-kube-api-access-xfqfc\") pod \"glance-db-create-wz8t9\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.836867 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/eec3298d-6113-4ba7-84d9-61a961e8128d-operator-scripts\") pod \"glance-db-create-wz8t9\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.859179 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfqfc\" (UniqueName: \"kubernetes.io/projected/eec3298d-6113-4ba7-84d9-61a961e8128d-kube-api-access-xfqfc\") pod \"glance-db-create-wz8t9\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.937933 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2thjh\" (UniqueName: \"kubernetes.io/projected/70b4c046-0b3b-42ea-b75d-ee15442bc981-kube-api-access-2thjh\") pod \"glance-c708-account-create-5pnzj\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.938056 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b4c046-0b3b-42ea-b75d-ee15442bc981-operator-scripts\") pod \"glance-c708-account-create-5pnzj\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.938806 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b4c046-0b3b-42ea-b75d-ee15442bc981-operator-scripts\") pod \"glance-c708-account-create-5pnzj\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.939043 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:47 crc kubenswrapper[4856]: I1122 08:40:47.954864 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thjh\" (UniqueName: \"kubernetes.io/projected/70b4c046-0b3b-42ea-b75d-ee15442bc981-kube-api-access-2thjh\") pod \"glance-c708-account-create-5pnzj\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.038725 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.377173 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wz8t9"] Nov 22 08:40:48 crc kubenswrapper[4856]: W1122 08:40:48.383869 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeec3298d_6113_4ba7_84d9_61a961e8128d.slice/crio-17370acbcb2d149c578bec74bdf3ea16a10959aed559c1fb9e4e6b8dff8620cf WatchSource:0}: Error finding container 17370acbcb2d149c578bec74bdf3ea16a10959aed559c1fb9e4e6b8dff8620cf: Status 404 returned error can't find the container with id 17370acbcb2d149c578bec74bdf3ea16a10959aed559c1fb9e4e6b8dff8620cf Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.510787 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-c708-account-create-5pnzj"] Nov 22 08:40:48 crc kubenswrapper[4856]: W1122 08:40:48.513583 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70b4c046_0b3b_42ea_b75d_ee15442bc981.slice/crio-5462f9eac273d83c6cb8b54d2fbde1e64ba8a360d9fdafccbdc623cbae33bddd WatchSource:0}: Error finding container 5462f9eac273d83c6cb8b54d2fbde1e64ba8a360d9fdafccbdc623cbae33bddd: Status 404 returned error can't find the container with id 5462f9eac273d83c6cb8b54d2fbde1e64ba8a360d9fdafccbdc623cbae33bddd Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.775043 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wz8t9" event={"ID":"eec3298d-6113-4ba7-84d9-61a961e8128d","Type":"ContainerStarted","Data":"3d08a6b17d0c267b751afcdc61abccf0504150187cf5a6260aa7a0934df37c3f"} Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.775099 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wz8t9" event={"ID":"eec3298d-6113-4ba7-84d9-61a961e8128d","Type":"ContainerStarted","Data":"17370acbcb2d149c578bec74bdf3ea16a10959aed559c1fb9e4e6b8dff8620cf"} Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.776726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c708-account-create-5pnzj" event={"ID":"70b4c046-0b3b-42ea-b75d-ee15442bc981","Type":"ContainerStarted","Data":"76617c03f9b62e7148875343c74a91123a984fbed254d762e837e281d8f51154"} Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.776772 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c708-account-create-5pnzj" event={"ID":"70b4c046-0b3b-42ea-b75d-ee15442bc981","Type":"ContainerStarted","Data":"5462f9eac273d83c6cb8b54d2fbde1e64ba8a360d9fdafccbdc623cbae33bddd"} Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.792450 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-wz8t9" podStartSLOduration=1.792432803 podStartE2EDuration="1.792432803s" podCreationTimestamp="2025-11-22 08:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:40:48.786950196 +0000 UTC m=+5891.200343454" watchObservedRunningTime="2025-11-22 08:40:48.792432803 +0000 UTC m=+5891.205826051" Nov 22 08:40:48 crc kubenswrapper[4856]: I1122 08:40:48.801732 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-c708-account-create-5pnzj" podStartSLOduration=1.8017130319999999 
podStartE2EDuration="1.801713032s" podCreationTimestamp="2025-11-22 08:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:40:48.799816072 +0000 UTC m=+5891.213209330" watchObservedRunningTime="2025-11-22 08:40:48.801713032 +0000 UTC m=+5891.215106290" Nov 22 08:40:49 crc kubenswrapper[4856]: I1122 08:40:49.789407 4856 generic.go:334] "Generic (PLEG): container finished" podID="70b4c046-0b3b-42ea-b75d-ee15442bc981" containerID="76617c03f9b62e7148875343c74a91123a984fbed254d762e837e281d8f51154" exitCode=0 Nov 22 08:40:49 crc kubenswrapper[4856]: I1122 08:40:49.789591 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c708-account-create-5pnzj" event={"ID":"70b4c046-0b3b-42ea-b75d-ee15442bc981","Type":"ContainerDied","Data":"76617c03f9b62e7148875343c74a91123a984fbed254d762e837e281d8f51154"} Nov 22 08:40:49 crc kubenswrapper[4856]: I1122 08:40:49.792997 4856 generic.go:334] "Generic (PLEG): container finished" podID="eec3298d-6113-4ba7-84d9-61a961e8128d" containerID="3d08a6b17d0c267b751afcdc61abccf0504150187cf5a6260aa7a0934df37c3f" exitCode=0 Nov 22 08:40:49 crc kubenswrapper[4856]: I1122 08:40:49.793038 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wz8t9" event={"ID":"eec3298d-6113-4ba7-84d9-61a961e8128d","Type":"ContainerDied","Data":"3d08a6b17d0c267b751afcdc61abccf0504150187cf5a6260aa7a0934df37c3f"} Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.171871 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.180679 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.303590 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfqfc\" (UniqueName: \"kubernetes.io/projected/eec3298d-6113-4ba7-84d9-61a961e8128d-kube-api-access-xfqfc\") pod \"eec3298d-6113-4ba7-84d9-61a961e8128d\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.304046 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2thjh\" (UniqueName: \"kubernetes.io/projected/70b4c046-0b3b-42ea-b75d-ee15442bc981-kube-api-access-2thjh\") pod \"70b4c046-0b3b-42ea-b75d-ee15442bc981\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.304084 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec3298d-6113-4ba7-84d9-61a961e8128d-operator-scripts\") pod \"eec3298d-6113-4ba7-84d9-61a961e8128d\" (UID: \"eec3298d-6113-4ba7-84d9-61a961e8128d\") " Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.304131 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b4c046-0b3b-42ea-b75d-ee15442bc981-operator-scripts\") pod \"70b4c046-0b3b-42ea-b75d-ee15442bc981\" (UID: \"70b4c046-0b3b-42ea-b75d-ee15442bc981\") " Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.305073 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70b4c046-0b3b-42ea-b75d-ee15442bc981-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70b4c046-0b3b-42ea-b75d-ee15442bc981" (UID: "70b4c046-0b3b-42ea-b75d-ee15442bc981"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.305159 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec3298d-6113-4ba7-84d9-61a961e8128d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eec3298d-6113-4ba7-84d9-61a961e8128d" (UID: "eec3298d-6113-4ba7-84d9-61a961e8128d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.309345 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec3298d-6113-4ba7-84d9-61a961e8128d-kube-api-access-xfqfc" (OuterVolumeSpecName: "kube-api-access-xfqfc") pod "eec3298d-6113-4ba7-84d9-61a961e8128d" (UID: "eec3298d-6113-4ba7-84d9-61a961e8128d"). InnerVolumeSpecName "kube-api-access-xfqfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.309439 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70b4c046-0b3b-42ea-b75d-ee15442bc981-kube-api-access-2thjh" (OuterVolumeSpecName: "kube-api-access-2thjh") pod "70b4c046-0b3b-42ea-b75d-ee15442bc981" (UID: "70b4c046-0b3b-42ea-b75d-ee15442bc981"). InnerVolumeSpecName "kube-api-access-2thjh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.406362 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfqfc\" (UniqueName: \"kubernetes.io/projected/eec3298d-6113-4ba7-84d9-61a961e8128d-kube-api-access-xfqfc\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.406407 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2thjh\" (UniqueName: \"kubernetes.io/projected/70b4c046-0b3b-42ea-b75d-ee15442bc981-kube-api-access-2thjh\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.406418 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eec3298d-6113-4ba7-84d9-61a961e8128d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.406426 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70b4c046-0b3b-42ea-b75d-ee15442bc981-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.813396 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-c708-account-create-5pnzj" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.813917 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-c708-account-create-5pnzj" event={"ID":"70b4c046-0b3b-42ea-b75d-ee15442bc981","Type":"ContainerDied","Data":"5462f9eac273d83c6cb8b54d2fbde1e64ba8a360d9fdafccbdc623cbae33bddd"} Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.813991 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5462f9eac273d83c6cb8b54d2fbde1e64ba8a360d9fdafccbdc623cbae33bddd" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.821208 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wz8t9" event={"ID":"eec3298d-6113-4ba7-84d9-61a961e8128d","Type":"ContainerDied","Data":"17370acbcb2d149c578bec74bdf3ea16a10959aed559c1fb9e4e6b8dff8620cf"} Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.821258 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17370acbcb2d149c578bec74bdf3ea16a10959aed559c1fb9e4e6b8dff8620cf" Nov 22 08:40:51 crc kubenswrapper[4856]: I1122 08:40:51.821320 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wz8t9" Nov 22 08:40:52 crc kubenswrapper[4856]: I1122 08:40:52.974875 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-2zs2k"] Nov 22 08:40:52 crc kubenswrapper[4856]: E1122 08:40:52.997050 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec3298d-6113-4ba7-84d9-61a961e8128d" containerName="mariadb-database-create" Nov 22 08:40:52 crc kubenswrapper[4856]: I1122 08:40:52.997095 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec3298d-6113-4ba7-84d9-61a961e8128d" containerName="mariadb-database-create" Nov 22 08:40:52 crc kubenswrapper[4856]: E1122 08:40:52.997132 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70b4c046-0b3b-42ea-b75d-ee15442bc981" containerName="mariadb-account-create" Nov 22 08:40:52 crc kubenswrapper[4856]: I1122 08:40:52.997142 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="70b4c046-0b3b-42ea-b75d-ee15442bc981" containerName="mariadb-account-create" Nov 22 08:40:52 crc kubenswrapper[4856]: I1122 08:40:52.997500 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec3298d-6113-4ba7-84d9-61a961e8128d" containerName="mariadb-database-create" Nov 22 08:40:52 crc kubenswrapper[4856]: I1122 08:40:52.997552 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="70b4c046-0b3b-42ea-b75d-ee15442bc981" containerName="mariadb-account-create" Nov 22 08:40:52 crc kubenswrapper[4856]: I1122 08:40:52.998305 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2zs2k"] Nov 22 08:40:52 crc kubenswrapper[4856]: I1122 08:40:52.998401 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.009247 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.009398 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4f6fv" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.144669 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-combined-ca-bundle\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.144788 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-config-data\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.145066 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-db-sync-config-data\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.145274 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7g4q\" (UniqueName: 
\"kubernetes.io/projected/267851c9-b132-4a55-a827-0844d57af030-kube-api-access-s7g4q\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.247617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-combined-ca-bundle\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.247736 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-config-data\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.247837 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-db-sync-config-data\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.247886 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7g4q\" (UniqueName: \"kubernetes.io/projected/267851c9-b132-4a55-a827-0844d57af030-kube-api-access-s7g4q\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.260733 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-db-sync-config-data\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.269331 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7g4q\" (UniqueName: \"kubernetes.io/projected/267851c9-b132-4a55-a827-0844d57af030-kube-api-access-s7g4q\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.270215 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-combined-ca-bundle\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.283602 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-config-data\") pod \"glance-db-sync-2zs2k\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.324253 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2zs2k" Nov 22 08:40:53 crc kubenswrapper[4856]: I1122 08:40:53.919060 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-2zs2k"] Nov 22 08:40:53 crc kubenswrapper[4856]: W1122 08:40:53.927249 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod267851c9_b132_4a55_a827_0844d57af030.slice/crio-f20dd49dba34f0e68fb691b5eda5fd8c8236dfcb36b263decd1bfea9fdfd35fb WatchSource:0}: Error finding container f20dd49dba34f0e68fb691b5eda5fd8c8236dfcb36b263decd1bfea9fdfd35fb: Status 404 returned error can't find the container with id f20dd49dba34f0e68fb691b5eda5fd8c8236dfcb36b263decd1bfea9fdfd35fb Nov 22 08:40:54 crc kubenswrapper[4856]: I1122 08:40:54.861156 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2zs2k" event={"ID":"267851c9-b132-4a55-a827-0844d57af030","Type":"ContainerStarted","Data":"f20dd49dba34f0e68fb691b5eda5fd8c8236dfcb36b263decd1bfea9fdfd35fb"} Nov 22 08:41:15 crc kubenswrapper[4856]: I1122 08:41:15.062037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2zs2k" event={"ID":"267851c9-b132-4a55-a827-0844d57af030","Type":"ContainerStarted","Data":"4b068997da0976c377390de0aa6b56639cf2efa55f68565090c895741512b7b6"} Nov 22 08:41:15 crc kubenswrapper[4856]: I1122 08:41:15.078674 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-2zs2k" podStartSLOduration=3.313714315 podStartE2EDuration="23.078659647s" podCreationTimestamp="2025-11-22 08:40:52 +0000 UTC" firstStartedPulling="2025-11-22 08:40:53.936277791 +0000 UTC m=+5896.349671059" lastFinishedPulling="2025-11-22 08:41:13.701223133 +0000 UTC m=+5916.114616391" observedRunningTime="2025-11-22 08:41:15.076834918 +0000 UTC m=+5917.490228176" watchObservedRunningTime="2025-11-22 08:41:15.078659647 +0000 UTC m=+5917.492052905" Nov 22 08:41:18 crc kubenswrapper[4856]: I1122 08:41:18.091374 4856 generic.go:334] "Generic (PLEG): container finished" podID="267851c9-b132-4a55-a827-0844d57af030" containerID="4b068997da0976c377390de0aa6b56639cf2efa55f68565090c895741512b7b6" exitCode=0 Nov 22 08:41:18 crc kubenswrapper[4856]: I1122 08:41:18.092062 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2zs2k" event={"ID":"267851c9-b132-4a55-a827-0844d57af030","Type":"ContainerDied","Data":"4b068997da0976c377390de0aa6b56639cf2efa55f68565090c895741512b7b6"} Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.476700 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-2zs2k" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.624331 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7g4q\" (UniqueName: \"kubernetes.io/projected/267851c9-b132-4a55-a827-0844d57af030-kube-api-access-s7g4q\") pod \"267851c9-b132-4a55-a827-0844d57af030\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.624419 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-config-data\") pod \"267851c9-b132-4a55-a827-0844d57af030\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.624503 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-db-sync-config-data\") pod \"267851c9-b132-4a55-a827-0844d57af030\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.624682 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-combined-ca-bundle\") pod \"267851c9-b132-4a55-a827-0844d57af030\" (UID: \"267851c9-b132-4a55-a827-0844d57af030\") " Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.633618 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "267851c9-b132-4a55-a827-0844d57af030" (UID: "267851c9-b132-4a55-a827-0844d57af030"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.634255 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267851c9-b132-4a55-a827-0844d57af030-kube-api-access-s7g4q" (OuterVolumeSpecName: "kube-api-access-s7g4q") pod "267851c9-b132-4a55-a827-0844d57af030" (UID: "267851c9-b132-4a55-a827-0844d57af030"). InnerVolumeSpecName "kube-api-access-s7g4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.657116 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "267851c9-b132-4a55-a827-0844d57af030" (UID: "267851c9-b132-4a55-a827-0844d57af030"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.682991 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-config-data" (OuterVolumeSpecName: "config-data") pod "267851c9-b132-4a55-a827-0844d57af030" (UID: "267851c9-b132-4a55-a827-0844d57af030"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.727469 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7g4q\" (UniqueName: \"kubernetes.io/projected/267851c9-b132-4a55-a827-0844d57af030-kube-api-access-s7g4q\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.727536 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.727550 4856 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:19 crc kubenswrapper[4856]: I1122 08:41:19.727562 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267851c9-b132-4a55-a827-0844d57af030-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.109869 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-2zs2k" event={"ID":"267851c9-b132-4a55-a827-0844d57af030","Type":"ContainerDied","Data":"f20dd49dba34f0e68fb691b5eda5fd8c8236dfcb36b263decd1bfea9fdfd35fb"} Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.109912 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f20dd49dba34f0e68fb691b5eda5fd8c8236dfcb36b263decd1bfea9fdfd35fb" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.110393 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-2zs2k" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.520448 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-594596d755-4csnw"] Nov 22 08:41:20 crc kubenswrapper[4856]: E1122 08:41:20.521097 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267851c9-b132-4a55-a827-0844d57af030" containerName="glance-db-sync" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.521119 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="267851c9-b132-4a55-a827-0844d57af030" containerName="glance-db-sync" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.521402 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="267851c9-b132-4a55-a827-0844d57af030" containerName="glance-db-sync" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.522954 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.529442 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.537811 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.554910 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.555041 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.555164 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4f6fv" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.576895 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594596d755-4csnw"] Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.600038 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.646049 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.646161 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zcnj\" (UniqueName: \"kubernetes.io/projected/9793b3c1-724a-4ac4-979e-c599b578ea24-kube-api-access-9zcnj\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.646210 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-nb\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.646251 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-sb\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.646475 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-config\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.649068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.649131 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.649170 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.649253 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-logs\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.649303 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5r4t\" (UniqueName: \"kubernetes.io/projected/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-kube-api-access-g5r4t\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.649398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-dns-svc\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.650057 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.651660 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.656120 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.674814 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.750713 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751055 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751088 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-logs\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751117 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5r4t\" (UniqueName: \"kubernetes.io/projected/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-kube-api-access-g5r4t\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751164 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-dns-svc\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751196 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751222 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751265 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc 
kubenswrapper[4856]: I1122 08:41:20.751319 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zcnj\" (UniqueName: \"kubernetes.io/projected/9793b3c1-724a-4ac4-979e-c599b578ea24-kube-api-access-9zcnj\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751397 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k68s\" (UniqueName: \"kubernetes.io/projected/fd0abe91-f35d-42fc-98b9-72eb322ba07a-kube-api-access-5k68s\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751429 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-nb\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751469 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-sb\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751494 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751585 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-config\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751613 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751645 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.751674 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc 
kubenswrapper[4856]: I1122 08:41:20.752169 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.752809 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-sb\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.752852 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-dns-svc\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.753176 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-logs\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.753182 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-nb\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.753728 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-config\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.759280 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.760259 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.768615 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.772649 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zcnj\" (UniqueName: 
\"kubernetes.io/projected/9793b3c1-724a-4ac4-979e-c599b578ea24-kube-api-access-9zcnj\") pod \"dnsmasq-dns-594596d755-4csnw\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.778876 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5r4t\" (UniqueName: \"kubernetes.io/projected/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-kube-api-access-g5r4t\") pod \"glance-default-external-api-0\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.853556 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.853611 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.853683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.854164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-logs\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.854225 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.854291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5k68s\" (UniqueName: \"kubernetes.io/projected/fd0abe91-f35d-42fc-98b9-72eb322ba07a-kube-api-access-5k68s\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.854330 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.854679 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.857768 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.858120 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.858282 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.860831 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.876393 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5k68s\" (UniqueName: \"kubernetes.io/projected/fd0abe91-f35d-42fc-98b9-72eb322ba07a-kube-api-access-5k68s\") pod \"glance-default-internal-api-0\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.880941 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:41:20 crc kubenswrapper[4856]: I1122 08:41:20.975016 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:21 crc kubenswrapper[4856]: I1122 08:41:21.364052 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-594596d755-4csnw"] Nov 22 08:41:21 crc kubenswrapper[4856]: I1122 08:41:21.530401 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:21 crc kubenswrapper[4856]: I1122 08:41:21.619430 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:22 crc kubenswrapper[4856]: I1122 08:41:22.132977 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e","Type":"ContainerStarted","Data":"535ce7febc5cacd5ea81ab738dc7110ae983e41d7fd010052cab72133dc64604"} Nov 22 08:41:22 crc kubenswrapper[4856]: I1122 08:41:22.135301 4856 generic.go:334] "Generic (PLEG): container finished" podID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerID="1fe8f0c0f1754657d34484a0f82b8ab7fecaf9a3286d0f3c7227f28a07bd14c0" exitCode=0 Nov 22 08:41:22 crc kubenswrapper[4856]: I1122 08:41:22.135396 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594596d755-4csnw" event={"ID":"9793b3c1-724a-4ac4-979e-c599b578ea24","Type":"ContainerDied","Data":"1fe8f0c0f1754657d34484a0f82b8ab7fecaf9a3286d0f3c7227f28a07bd14c0"} Nov 22 08:41:22 crc kubenswrapper[4856]: I1122 08:41:22.135504 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594596d755-4csnw" event={"ID":"9793b3c1-724a-4ac4-979e-c599b578ea24","Type":"ContainerStarted","Data":"6fa6e451633bf61b03e90d6b60ed79bc0b810361639a62c101bf53e5146a584a"} Nov 22 08:41:22 crc kubenswrapper[4856]: I1122 08:41:22.338494 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:22 crc kubenswrapper[4856]: W1122 08:41:22.349072 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd0abe91_f35d_42fc_98b9_72eb322ba07a.slice/crio-b1de76aa018567a19196c72f0b19a0126a1371a53c054ed725b430de7e000fb2 WatchSource:0}: Error finding container b1de76aa018567a19196c72f0b19a0126a1371a53c054ed725b430de7e000fb2: Status 404 returned error can't find the container with id b1de76aa018567a19196c72f0b19a0126a1371a53c054ed725b430de7e000fb2 Nov 22 08:41:22 crc kubenswrapper[4856]: I1122 08:41:22.900150 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.165461 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd0abe91-f35d-42fc-98b9-72eb322ba07a","Type":"ContainerStarted","Data":"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9"} Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.165560 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd0abe91-f35d-42fc-98b9-72eb322ba07a","Type":"ContainerStarted","Data":"b1de76aa018567a19196c72f0b19a0126a1371a53c054ed725b430de7e000fb2"} Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.168478 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594596d755-4csnw" 
event={"ID":"9793b3c1-724a-4ac4-979e-c599b578ea24","Type":"ContainerStarted","Data":"3b4ae1e4819eec39b691cd248b971c88728476686b03d8b85062fd4611557e9e"} Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.170128 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.179203 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e","Type":"ContainerStarted","Data":"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f"} Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.179258 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e","Type":"ContainerStarted","Data":"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c"} Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.179404 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-log" containerID="cri-o://b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c" gracePeriod=30 Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.179698 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-httpd" containerID="cri-o://d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f" gracePeriod=30 Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.203116 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-594596d755-4csnw" podStartSLOduration=3.203091132 podStartE2EDuration="3.203091132s" podCreationTimestamp="2025-11-22 08:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:41:23.194351078 +0000 UTC m=+5925.607744346" watchObservedRunningTime="2025-11-22 08:41:23.203091132 +0000 UTC m=+5925.616484390" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.225159 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.225131995 podStartE2EDuration="3.225131995s" podCreationTimestamp="2025-11-22 08:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:41:23.215590839 +0000 UTC m=+5925.628984117" watchObservedRunningTime="2025-11-22 08:41:23.225131995 +0000 UTC m=+5925.638525253" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.719748 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.840110 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5r4t\" (UniqueName: \"kubernetes.io/projected/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-kube-api-access-g5r4t\") pod \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.840203 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-scripts\") pod \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.840278 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-config-data\") pod \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.840384 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-combined-ca-bundle\") pod \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.840445 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-logs\") pod \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.840497 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-httpd-run\") pod \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\" (UID: \"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e\") " Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.841224 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" (UID: "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.841425 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-logs" (OuterVolumeSpecName: "logs") pod "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" (UID: "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.851570 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-scripts" (OuterVolumeSpecName: "scripts") pod "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" (UID: "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.861711 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-kube-api-access-g5r4t" (OuterVolumeSpecName: "kube-api-access-g5r4t") pod "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" (UID: "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e"). InnerVolumeSpecName "kube-api-access-g5r4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.883676 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" (UID: "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.915573 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-config-data" (OuterVolumeSpecName: "config-data") pod "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" (UID: "7a6b45e3-a294-4b32-8a66-b4deb6e23b2e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.943798 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.943978 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.944057 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.944306 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.944386 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:23 crc kubenswrapper[4856]: I1122 08:41:23.944456 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5r4t\" (UniqueName: \"kubernetes.io/projected/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e-kube-api-access-g5r4t\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.191375 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd0abe91-f35d-42fc-98b9-72eb322ba07a","Type":"ContainerStarted","Data":"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77"} Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.191492 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerName="glance-log" 
containerID="cri-o://22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9" gracePeriod=30 Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.191553 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerName="glance-httpd" containerID="cri-o://22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77" gracePeriod=30 Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.193663 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerID="d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f" exitCode=143 Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.193696 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerID="b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c" exitCode=143 Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.193715 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.193768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e","Type":"ContainerDied","Data":"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f"} Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.193827 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e","Type":"ContainerDied","Data":"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c"} Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.193842 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7a6b45e3-a294-4b32-8a66-b4deb6e23b2e","Type":"ContainerDied","Data":"535ce7febc5cacd5ea81ab738dc7110ae983e41d7fd010052cab72133dc64604"} Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.193864 4856 scope.go:117] "RemoveContainer" containerID="d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.227564 4856 scope.go:117] "RemoveContainer" containerID="b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.239992 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.239940318 podStartE2EDuration="4.239940318s" podCreationTimestamp="2025-11-22 08:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:41:24.221992885 +0000 UTC m=+5926.635386163" watchObservedRunningTime="2025-11-22 08:41:24.239940318 +0000 UTC m=+5926.653333586" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.243253 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.253165 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.260712 4856 scope.go:117] "RemoveContainer" containerID="d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f" Nov 22 
08:41:24 crc kubenswrapper[4856]: E1122 08:41:24.261785 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f\": container with ID starting with d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f not found: ID does not exist" containerID="d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.261841 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f"} err="failed to get container status \"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f\": rpc error: code = NotFound desc = could not find container \"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f\": container with ID starting with d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f not found: ID does not exist" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.261876 4856 scope.go:117] "RemoveContainer" containerID="b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c" Nov 22 08:41:24 crc kubenswrapper[4856]: E1122 08:41:24.262450 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c\": container with ID starting with b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c not found: ID does not exist" containerID="b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.262499 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c"} err="failed to get container status \"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c\": rpc error: code = NotFound desc = could not find container \"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c\": container with ID starting with b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c not found: ID does not exist" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.262552 4856 scope.go:117] "RemoveContainer" containerID="d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.262878 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f"} err="failed to get container status \"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f\": rpc error: code = NotFound desc = could not find container \"d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f\": container with ID starting with d74b6e6bc0399de98e0cbd25b276b81bcc95b0e0f325cc3a046e317d2743a49f not found: ID does not exist" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.262918 4856 scope.go:117] "RemoveContainer" containerID="b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.263235 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c"} err="failed to get container status 
\"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c\": rpc error: code = NotFound desc = could not find container \"b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c\": container with ID starting with b747ec2c7d0eb9a0c0318f2fd0f041830f27a6cdfd767a99a789f13f3793ac1c not found: ID does not exist" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.273857 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:24 crc kubenswrapper[4856]: E1122 08:41:24.274359 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-httpd" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.274381 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-httpd" Nov 22 08:41:24 crc kubenswrapper[4856]: E1122 08:41:24.274445 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-log" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.274455 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-log" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.274701 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-httpd" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.274748 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" containerName="glance-log" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.276065 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.281747 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.284251 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.314777 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.356088 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr9fk\" (UniqueName: \"kubernetes.io/projected/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-kube-api-access-tr9fk\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.356180 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.356203 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " 
pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.356224 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.356273 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-logs\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.356295 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.356334 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.457673 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.457728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.457777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.458415 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.458459 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-logs\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 
08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.458477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-logs\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.459088 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.459165 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.459300 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr9fk\" (UniqueName: \"kubernetes.io/projected/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-kube-api-access-tr9fk\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.462522 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.463345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-config-data\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.466567 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-scripts\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.468589 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.481596 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr9fk\" (UniqueName: \"kubernetes.io/projected/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-kube-api-access-tr9fk\") pod \"glance-default-external-api-0\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.601900 4856 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:41:24 crc kubenswrapper[4856]: I1122 08:41:24.725669 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a6b45e3-a294-4b32-8a66-b4deb6e23b2e" path="/var/lib/kubelet/pods/7a6b45e3-a294-4b32-8a66-b4deb6e23b2e/volumes" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.169973 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.209579 4856 generic.go:334] "Generic (PLEG): container finished" podID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerID="22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77" exitCode=0 Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.209626 4856 generic.go:334] "Generic (PLEG): container finished" podID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerID="22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9" exitCode=143 Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.209677 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd0abe91-f35d-42fc-98b9-72eb322ba07a","Type":"ContainerDied","Data":"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77"} Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.209715 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd0abe91-f35d-42fc-98b9-72eb322ba07a","Type":"ContainerDied","Data":"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9"} Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.209731 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"fd0abe91-f35d-42fc-98b9-72eb322ba07a","Type":"ContainerDied","Data":"b1de76aa018567a19196c72f0b19a0126a1371a53c054ed725b430de7e000fb2"} Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.209751 4856 scope.go:117] "RemoveContainer" containerID="22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.209898 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.212269 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.276024 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-combined-ca-bundle\") pod \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.276110 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-logs\") pod \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.276145 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-config-data\") pod \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.276175 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-httpd-run\") pod \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.276218 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-scripts\") pod \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.276299 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5k68s\" (UniqueName: \"kubernetes.io/projected/fd0abe91-f35d-42fc-98b9-72eb322ba07a-kube-api-access-5k68s\") pod \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\" (UID: \"fd0abe91-f35d-42fc-98b9-72eb322ba07a\") " Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.277051 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd0abe91-f35d-42fc-98b9-72eb322ba07a" (UID: "fd0abe91-f35d-42fc-98b9-72eb322ba07a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.277072 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-logs" (OuterVolumeSpecName: "logs") pod "fd0abe91-f35d-42fc-98b9-72eb322ba07a" (UID: "fd0abe91-f35d-42fc-98b9-72eb322ba07a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.277869 4856 scope.go:117] "RemoveContainer" containerID="22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.282146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-scripts" (OuterVolumeSpecName: "scripts") pod "fd0abe91-f35d-42fc-98b9-72eb322ba07a" (UID: "fd0abe91-f35d-42fc-98b9-72eb322ba07a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.283982 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0abe91-f35d-42fc-98b9-72eb322ba07a-kube-api-access-5k68s" (OuterVolumeSpecName: "kube-api-access-5k68s") pod "fd0abe91-f35d-42fc-98b9-72eb322ba07a" (UID: "fd0abe91-f35d-42fc-98b9-72eb322ba07a"). InnerVolumeSpecName "kube-api-access-5k68s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.304389 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd0abe91-f35d-42fc-98b9-72eb322ba07a" (UID: "fd0abe91-f35d-42fc-98b9-72eb322ba07a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.331045 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-config-data" (OuterVolumeSpecName: "config-data") pod "fd0abe91-f35d-42fc-98b9-72eb322ba07a" (UID: "fd0abe91-f35d-42fc-98b9-72eb322ba07a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.340216 4856 scope.go:117] "RemoveContainer" containerID="22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77" Nov 22 08:41:25 crc kubenswrapper[4856]: E1122 08:41:25.340879 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77\": container with ID starting with 22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77 not found: ID does not exist" containerID="22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.341024 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77"} err="failed to get container status \"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77\": rpc error: code = NotFound desc = could not find container \"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77\": container with ID starting with 22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77 not found: ID does not exist" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.341074 4856 scope.go:117] "RemoveContainer" containerID="22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9" Nov 22 08:41:25 crc kubenswrapper[4856]: E1122 08:41:25.341474 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9\": container with ID starting with 22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9 not found: ID does not exist" containerID="22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.341705 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9"} err="failed to get container status \"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9\": rpc error: code = NotFound desc = could not find container \"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9\": container with ID starting with 22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9 not found: ID does not exist" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.341851 4856 scope.go:117] "RemoveContainer" containerID="22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.342305 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77"} err="failed to get container status \"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77\": rpc error: code = NotFound desc = could not find container \"22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77\": container with ID starting with 22fe66009ada0d08cebd067de17084e937a541bd5bd6ee4116b4be1fac440b77 not found: ID does not exist" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.342352 4856 scope.go:117] "RemoveContainer" containerID="22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.342886 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9"} err="failed to get container status \"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9\": rpc error: code = NotFound desc = could not find container \"22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9\": container with ID starting with 22bfb05ba81c3c76ae0d63dd6e56ac92d7ac57f004e3be0655370858e9f0b9b9 not found: ID does not exist" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.378370 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.378404 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.378414 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.378424 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd0abe91-f35d-42fc-98b9-72eb322ba07a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.378432 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd0abe91-f35d-42fc-98b9-72eb322ba07a-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.378441 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5k68s\" (UniqueName: \"kubernetes.io/projected/fd0abe91-f35d-42fc-98b9-72eb322ba07a-kube-api-access-5k68s\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.570157 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.584685 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.601032 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:25 crc kubenswrapper[4856]: E1122 08:41:25.601836 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerName="glance-httpd" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.601866 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerName="glance-httpd" Nov 22 08:41:25 crc kubenswrapper[4856]: E1122 08:41:25.601926 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerName="glance-log" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.601937 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerName="glance-log" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.602186 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" 
containerName="glance-log" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.602223 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" containerName="glance-httpd" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.603975 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.606351 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.607418 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.616775 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.788745 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-logs\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.789043 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.789159 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.789344 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxlc4\" (UniqueName: \"kubernetes.io/projected/f971a400-35ad-42f4-a6a2-818bb7dc026d-kube-api-access-fxlc4\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.789464 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.789502 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.789677 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.892703 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-logs\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.892792 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.892855 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.892914 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxlc4\" (UniqueName: \"kubernetes.io/projected/f971a400-35ad-42f4-a6a2-818bb7dc026d-kube-api-access-fxlc4\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.892949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.892968 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.893009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.893320 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-logs\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.893619 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.897709 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.897971 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.899090 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.899478 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.919053 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxlc4\" (UniqueName: \"kubernetes.io/projected/f971a400-35ad-42f4-a6a2-818bb7dc026d-kube-api-access-fxlc4\") pod \"glance-default-internal-api-0\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:41:25 crc kubenswrapper[4856]: I1122 08:41:25.927943 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:26 crc kubenswrapper[4856]: I1122 08:41:26.230457 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9","Type":"ContainerStarted","Data":"04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e"} Nov 22 08:41:26 crc kubenswrapper[4856]: I1122 08:41:26.230785 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9","Type":"ContainerStarted","Data":"a5806f31084f90b6a9ee7dc10b638b5efd6d62e0c6554b6d5f1b2807eeacfeaa"} Nov 22 08:41:26 crc kubenswrapper[4856]: I1122 08:41:26.507152 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:41:26 crc kubenswrapper[4856]: W1122 08:41:26.507658 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf971a400_35ad_42f4_a6a2_818bb7dc026d.slice/crio-30ac2e82fb4aff5a0c00655282219de1054482599d2e60e966a6ffe77e133d5f WatchSource:0}: Error finding container 30ac2e82fb4aff5a0c00655282219de1054482599d2e60e966a6ffe77e133d5f: Status 404 returned error can't find the container with id 30ac2e82fb4aff5a0c00655282219de1054482599d2e60e966a6ffe77e133d5f Nov 22 08:41:26 crc kubenswrapper[4856]: I1122 08:41:26.724173 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0abe91-f35d-42fc-98b9-72eb322ba07a" path="/var/lib/kubelet/pods/fd0abe91-f35d-42fc-98b9-72eb322ba07a/volumes" Nov 22 08:41:27 crc kubenswrapper[4856]: I1122 08:41:27.249103 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f971a400-35ad-42f4-a6a2-818bb7dc026d","Type":"ContainerStarted","Data":"b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59"} Nov 22 08:41:27 crc kubenswrapper[4856]: I1122 08:41:27.249809 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f971a400-35ad-42f4-a6a2-818bb7dc026d","Type":"ContainerStarted","Data":"30ac2e82fb4aff5a0c00655282219de1054482599d2e60e966a6ffe77e133d5f"} Nov 22 08:41:27 crc kubenswrapper[4856]: I1122 08:41:27.251427 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9","Type":"ContainerStarted","Data":"19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e"} Nov 22 08:41:27 crc kubenswrapper[4856]: I1122 08:41:27.279835 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.279814329 podStartE2EDuration="3.279814329s" podCreationTimestamp="2025-11-22 08:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:41:27.273440007 +0000 UTC m=+5929.686833275" watchObservedRunningTime="2025-11-22 08:41:27.279814329 +0000 UTC m=+5929.693207587" Nov 22 08:41:28 crc kubenswrapper[4856]: I1122 08:41:28.267148 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f971a400-35ad-42f4-a6a2-818bb7dc026d","Type":"ContainerStarted","Data":"2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4"} Nov 22 08:41:28 crc kubenswrapper[4856]: I1122 
08:41:28.291467 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.291432094 podStartE2EDuration="3.291432094s" podCreationTimestamp="2025-11-22 08:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:41:28.289378039 +0000 UTC m=+5930.702771317" watchObservedRunningTime="2025-11-22 08:41:28.291432094 +0000 UTC m=+5930.704825352" Nov 22 08:41:29 crc kubenswrapper[4856]: I1122 08:41:29.754340 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:41:29 crc kubenswrapper[4856]: I1122 08:41:29.754717 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:41:30 crc kubenswrapper[4856]: I1122 08:41:30.862854 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:41:30 crc kubenswrapper[4856]: I1122 08:41:30.933069 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d944f7c5f-lkpm6"] Nov 22 08:41:30 crc kubenswrapper[4856]: I1122 08:41:30.933354 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" podUID="38c1c139-a689-46ca-84f5-d896cced8655" containerName="dnsmasq-dns" containerID="cri-o://db843dd12c0cab2e7deee76fbf9ec6bccc9fdbef7d654090b58686777f362181" gracePeriod=10 Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.315533 4856 generic.go:334] "Generic (PLEG): container finished" podID="38c1c139-a689-46ca-84f5-d896cced8655" containerID="db843dd12c0cab2e7deee76fbf9ec6bccc9fdbef7d654090b58686777f362181" exitCode=0 Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.315633 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" event={"ID":"38c1c139-a689-46ca-84f5-d896cced8655","Type":"ContainerDied","Data":"db843dd12c0cab2e7deee76fbf9ec6bccc9fdbef7d654090b58686777f362181"} Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.470326 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.529932 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nrjc\" (UniqueName: \"kubernetes.io/projected/38c1c139-a689-46ca-84f5-d896cced8655-kube-api-access-6nrjc\") pod \"38c1c139-a689-46ca-84f5-d896cced8655\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.530309 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-sb\") pod \"38c1c139-a689-46ca-84f5-d896cced8655\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.531423 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-dns-svc\") pod \"38c1c139-a689-46ca-84f5-d896cced8655\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.531635 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-config\") pod \"38c1c139-a689-46ca-84f5-d896cced8655\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.531659 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-nb\") pod \"38c1c139-a689-46ca-84f5-d896cced8655\" (UID: \"38c1c139-a689-46ca-84f5-d896cced8655\") " Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.544828 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c1c139-a689-46ca-84f5-d896cced8655-kube-api-access-6nrjc" (OuterVolumeSpecName: "kube-api-access-6nrjc") pod "38c1c139-a689-46ca-84f5-d896cced8655" (UID: "38c1c139-a689-46ca-84f5-d896cced8655"). InnerVolumeSpecName "kube-api-access-6nrjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.586401 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38c1c139-a689-46ca-84f5-d896cced8655" (UID: "38c1c139-a689-46ca-84f5-d896cced8655"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.587279 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-config" (OuterVolumeSpecName: "config") pod "38c1c139-a689-46ca-84f5-d896cced8655" (UID: "38c1c139-a689-46ca-84f5-d896cced8655"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.596397 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38c1c139-a689-46ca-84f5-d896cced8655" (UID: "38c1c139-a689-46ca-84f5-d896cced8655"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.599292 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38c1c139-a689-46ca-84f5-d896cced8655" (UID: "38c1c139-a689-46ca-84f5-d896cced8655"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.635420 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.635471 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.635485 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.635529 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nrjc\" (UniqueName: \"kubernetes.io/projected/38c1c139-a689-46ca-84f5-d896cced8655-kube-api-access-6nrjc\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:31 crc kubenswrapper[4856]: I1122 08:41:31.635544 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38c1c139-a689-46ca-84f5-d896cced8655-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:32 crc kubenswrapper[4856]: I1122 08:41:32.326657 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" event={"ID":"38c1c139-a689-46ca-84f5-d896cced8655","Type":"ContainerDied","Data":"8182fffd124ff448355154d9b99e0ab8b6c2018b6630ad77ae79ebd2cac7d394"} Nov 22 08:41:32 crc kubenswrapper[4856]: I1122 08:41:32.326719 4856 scope.go:117] "RemoveContainer" containerID="db843dd12c0cab2e7deee76fbf9ec6bccc9fdbef7d654090b58686777f362181" Nov 22 08:41:32 crc kubenswrapper[4856]: I1122 08:41:32.326777 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d944f7c5f-lkpm6" Nov 22 08:41:32 crc kubenswrapper[4856]: I1122 08:41:32.359616 4856 scope.go:117] "RemoveContainer" containerID="dd87d87fc1a05685a1e7ba10e55ba2153690a97ca29021b257cb2a58951cfc25" Nov 22 08:41:32 crc kubenswrapper[4856]: I1122 08:41:32.366668 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d944f7c5f-lkpm6"] Nov 22 08:41:32 crc kubenswrapper[4856]: I1122 08:41:32.378546 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d944f7c5f-lkpm6"] Nov 22 08:41:32 crc kubenswrapper[4856]: I1122 08:41:32.722988 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c1c139-a689-46ca-84f5-d896cced8655" path="/var/lib/kubelet/pods/38c1c139-a689-46ca-84f5-d896cced8655/volumes" Nov 22 08:41:34 crc kubenswrapper[4856]: I1122 08:41:34.603235 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 08:41:34 crc kubenswrapper[4856]: I1122 08:41:34.603867 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 08:41:34 crc kubenswrapper[4856]: I1122 08:41:34.635547 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 08:41:34 crc kubenswrapper[4856]: I1122 08:41:34.645871 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 08:41:35 crc kubenswrapper[4856]: I1122 08:41:35.355225 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 08:41:35 crc kubenswrapper[4856]: I1122 08:41:35.355320 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 08:41:35 crc kubenswrapper[4856]: I1122 08:41:35.929559 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:35 crc kubenswrapper[4856]: I1122 08:41:35.929624 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:35 crc kubenswrapper[4856]: I1122 08:41:35.962563 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:35 crc kubenswrapper[4856]: I1122 08:41:35.972729 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.316800 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7fdsh"] Nov 22 08:41:36 crc kubenswrapper[4856]: E1122 08:41:36.317149 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c1c139-a689-46ca-84f5-d896cced8655" containerName="dnsmasq-dns" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.317163 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c1c139-a689-46ca-84f5-d896cced8655" containerName="dnsmasq-dns" Nov 22 08:41:36 crc kubenswrapper[4856]: E1122 08:41:36.317182 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c1c139-a689-46ca-84f5-d896cced8655" containerName="init" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.317189 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="38c1c139-a689-46ca-84f5-d896cced8655" containerName="init" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.317366 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c1c139-a689-46ca-84f5-d896cced8655" containerName="dnsmasq-dns" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.318649 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.341833 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fdsh"] Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.384335 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.384801 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.428083 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-catalog-content\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.428130 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbm6\" (UniqueName: \"kubernetes.io/projected/9a37b132-fd9a-48d4-8b0a-1cc67822396c-kube-api-access-dkbm6\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.428200 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-utilities\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.530375 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-catalog-content\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.530487 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbm6\" (UniqueName: \"kubernetes.io/projected/9a37b132-fd9a-48d4-8b0a-1cc67822396c-kube-api-access-dkbm6\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.530615 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-utilities\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.531106 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-catalog-content\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.531142 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-utilities\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.559035 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbm6\" (UniqueName: \"kubernetes.io/projected/9a37b132-fd9a-48d4-8b0a-1cc67822396c-kube-api-access-dkbm6\") pod \"redhat-operators-7fdsh\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:36 crc kubenswrapper[4856]: I1122 08:41:36.651744 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:37 crc kubenswrapper[4856]: I1122 08:41:37.165591 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7fdsh"] Nov 22 08:41:37 crc kubenswrapper[4856]: W1122 08:41:37.173043 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a37b132_fd9a_48d4_8b0a_1cc67822396c.slice/crio-1393625bf9e7182b62c593c270ea133ae10db2d0898c4fd641803a6c5f37f4c0 WatchSource:0}: Error finding container 1393625bf9e7182b62c593c270ea133ae10db2d0898c4fd641803a6c5f37f4c0: Status 404 returned error can't find the container with id 1393625bf9e7182b62c593c270ea133ae10db2d0898c4fd641803a6c5f37f4c0 Nov 22 08:41:37 crc kubenswrapper[4856]: I1122 08:41:37.394795 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fdsh" event={"ID":"9a37b132-fd9a-48d4-8b0a-1cc67822396c","Type":"ContainerStarted","Data":"1393625bf9e7182b62c593c270ea133ae10db2d0898c4fd641803a6c5f37f4c0"} Nov 22 08:41:37 crc kubenswrapper[4856]: I1122 08:41:37.496812 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 08:41:37 crc kubenswrapper[4856]: I1122 08:41:37.496938 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 08:41:37 crc kubenswrapper[4856]: I1122 08:41:37.615022 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 08:41:39 crc kubenswrapper[4856]: I1122 08:41:39.056082 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:39 crc kubenswrapper[4856]: I1122 08:41:39.056466 4856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 22 08:41:39 crc kubenswrapper[4856]: I1122 08:41:39.138141 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 08:41:39 crc kubenswrapper[4856]: I1122 08:41:39.414703 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerID="7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d" exitCode=0 Nov 22 08:41:39 crc kubenswrapper[4856]: I1122 08:41:39.414986 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fdsh" event={"ID":"9a37b132-fd9a-48d4-8b0a-1cc67822396c","Type":"ContainerDied","Data":"7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d"} Nov 22 08:41:41 crc kubenswrapper[4856]: I1122 08:41:41.433995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fdsh" event={"ID":"9a37b132-fd9a-48d4-8b0a-1cc67822396c","Type":"ContainerStarted","Data":"cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851"} Nov 22 08:41:42 crc kubenswrapper[4856]: I1122 08:41:42.445398 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerID="cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851" exitCode=0 Nov 22 08:41:42 crc kubenswrapper[4856]: I1122 08:41:42.445575 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fdsh" event={"ID":"9a37b132-fd9a-48d4-8b0a-1cc67822396c","Type":"ContainerDied","Data":"cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851"} Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.468951 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-j6kvd"] Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.470832 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.487055 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j6kvd"] Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.503795 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fdsh" event={"ID":"9a37b132-fd9a-48d4-8b0a-1cc67822396c","Type":"ContainerStarted","Data":"fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15"} Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.524181 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7fdsh" podStartSLOduration=3.884762019 podStartE2EDuration="11.524162052s" podCreationTimestamp="2025-11-22 08:41:36 +0000 UTC" firstStartedPulling="2025-11-22 08:41:39.417089172 +0000 UTC m=+5941.830482430" lastFinishedPulling="2025-11-22 08:41:47.056489185 +0000 UTC m=+5949.469882463" observedRunningTime="2025-11-22 08:41:47.520207296 +0000 UTC m=+5949.933600574" watchObservedRunningTime="2025-11-22 08:41:47.524162052 +0000 UTC m=+5949.937555310" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.564376 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f5wf\" (UniqueName: \"kubernetes.io/projected/b1c0375c-7ae6-478c-a7a4-501faf59190c-kube-api-access-9f5wf\") pod \"placement-db-create-j6kvd\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.564469 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1c0375c-7ae6-478c-a7a4-501faf59190c-operator-scripts\") pod \"placement-db-create-j6kvd\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.604759 4856 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-38fd-account-create-vmlhx"] Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.606732 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.609078 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.614584 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-38fd-account-create-vmlhx"] Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.665975 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f5wf\" (UniqueName: \"kubernetes.io/projected/b1c0375c-7ae6-478c-a7a4-501faf59190c-kube-api-access-9f5wf\") pod \"placement-db-create-j6kvd\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.666092 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1c0375c-7ae6-478c-a7a4-501faf59190c-operator-scripts\") pod \"placement-db-create-j6kvd\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.666134 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcnc\" (UniqueName: \"kubernetes.io/projected/dac198ef-b907-472d-8004-7c5f07fd55f9-kube-api-access-mlcnc\") pod \"placement-38fd-account-create-vmlhx\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.666165 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac198ef-b907-472d-8004-7c5f07fd55f9-operator-scripts\") pod \"placement-38fd-account-create-vmlhx\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.667005 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1c0375c-7ae6-478c-a7a4-501faf59190c-operator-scripts\") pod \"placement-db-create-j6kvd\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.688686 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f5wf\" (UniqueName: \"kubernetes.io/projected/b1c0375c-7ae6-478c-a7a4-501faf59190c-kube-api-access-9f5wf\") pod \"placement-db-create-j6kvd\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.768332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlcnc\" (UniqueName: \"kubernetes.io/projected/dac198ef-b907-472d-8004-7c5f07fd55f9-kube-api-access-mlcnc\") pod \"placement-38fd-account-create-vmlhx\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.768380 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac198ef-b907-472d-8004-7c5f07fd55f9-operator-scripts\") pod \"placement-38fd-account-create-vmlhx\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.769235 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac198ef-b907-472d-8004-7c5f07fd55f9-operator-scripts\") pod \"placement-38fd-account-create-vmlhx\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.787455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlcnc\" (UniqueName: \"kubernetes.io/projected/dac198ef-b907-472d-8004-7c5f07fd55f9-kube-api-access-mlcnc\") pod \"placement-38fd-account-create-vmlhx\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.800202 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:47 crc kubenswrapper[4856]: I1122 08:41:47.928097 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:48 crc kubenswrapper[4856]: I1122 08:41:48.221958 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-j6kvd"] Nov 22 08:41:48 crc kubenswrapper[4856]: W1122 08:41:48.225660 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1c0375c_7ae6_478c_a7a4_501faf59190c.slice/crio-a573730cc153bfdfcde1fe2bfc1ea2aa1a9bfad903c67fb860f1a80a508f54fc WatchSource:0}: Error finding container a573730cc153bfdfcde1fe2bfc1ea2aa1a9bfad903c67fb860f1a80a508f54fc: Status 404 returned error can't find the container with id a573730cc153bfdfcde1fe2bfc1ea2aa1a9bfad903c67fb860f1a80a508f54fc Nov 22 08:41:48 crc kubenswrapper[4856]: I1122 08:41:48.385138 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-38fd-account-create-vmlhx"] Nov 22 08:41:48 crc kubenswrapper[4856]: I1122 08:41:48.514694 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j6kvd" event={"ID":"b1c0375c-7ae6-478c-a7a4-501faf59190c","Type":"ContainerStarted","Data":"c01e1218def5500b9e8246aad5992250187408a00326616c53a3cf2e08346b5b"} Nov 22 08:41:48 crc kubenswrapper[4856]: I1122 08:41:48.515026 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j6kvd" event={"ID":"b1c0375c-7ae6-478c-a7a4-501faf59190c","Type":"ContainerStarted","Data":"a573730cc153bfdfcde1fe2bfc1ea2aa1a9bfad903c67fb860f1a80a508f54fc"} Nov 22 08:41:48 crc kubenswrapper[4856]: I1122 08:41:48.516378 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-38fd-account-create-vmlhx" event={"ID":"dac198ef-b907-472d-8004-7c5f07fd55f9","Type":"ContainerStarted","Data":"638f338c1cae89d62d9c2fa22d952250d55cf6b8b4d59e71fddcee0eddb455c1"} Nov 22 08:41:48 crc kubenswrapper[4856]: I1122 08:41:48.541764 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-j6kvd" podStartSLOduration=1.541743768 podStartE2EDuration="1.541743768s" podCreationTimestamp="2025-11-22 
08:41:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:41:48.54031286 +0000 UTC m=+5950.953706118" watchObservedRunningTime="2025-11-22 08:41:48.541743768 +0000 UTC m=+5950.955137026" Nov 22 08:41:49 crc kubenswrapper[4856]: I1122 08:41:49.528946 4856 generic.go:334] "Generic (PLEG): container finished" podID="b1c0375c-7ae6-478c-a7a4-501faf59190c" containerID="c01e1218def5500b9e8246aad5992250187408a00326616c53a3cf2e08346b5b" exitCode=0 Nov 22 08:41:49 crc kubenswrapper[4856]: I1122 08:41:49.529033 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j6kvd" event={"ID":"b1c0375c-7ae6-478c-a7a4-501faf59190c","Type":"ContainerDied","Data":"c01e1218def5500b9e8246aad5992250187408a00326616c53a3cf2e08346b5b"} Nov 22 08:41:49 crc kubenswrapper[4856]: I1122 08:41:49.531297 4856 generic.go:334] "Generic (PLEG): container finished" podID="dac198ef-b907-472d-8004-7c5f07fd55f9" containerID="d5900607e67700e179cab68677f87421590f3b86544f521ada53239be99af627" exitCode=0 Nov 22 08:41:49 crc kubenswrapper[4856]: I1122 08:41:49.531402 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-38fd-account-create-vmlhx" event={"ID":"dac198ef-b907-472d-8004-7c5f07fd55f9","Type":"ContainerDied","Data":"d5900607e67700e179cab68677f87421590f3b86544f521ada53239be99af627"} Nov 22 08:41:50 crc kubenswrapper[4856]: I1122 08:41:50.993382 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:50 crc kubenswrapper[4856]: I1122 08:41:50.999531 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.039478 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1c0375c-7ae6-478c-a7a4-501faf59190c-operator-scripts\") pod \"b1c0375c-7ae6-478c-a7a4-501faf59190c\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.039638 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f5wf\" (UniqueName: \"kubernetes.io/projected/b1c0375c-7ae6-478c-a7a4-501faf59190c-kube-api-access-9f5wf\") pod \"b1c0375c-7ae6-478c-a7a4-501faf59190c\" (UID: \"b1c0375c-7ae6-478c-a7a4-501faf59190c\") " Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.039736 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac198ef-b907-472d-8004-7c5f07fd55f9-operator-scripts\") pod \"dac198ef-b907-472d-8004-7c5f07fd55f9\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.039772 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlcnc\" (UniqueName: \"kubernetes.io/projected/dac198ef-b907-472d-8004-7c5f07fd55f9-kube-api-access-mlcnc\") pod \"dac198ef-b907-472d-8004-7c5f07fd55f9\" (UID: \"dac198ef-b907-472d-8004-7c5f07fd55f9\") " Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.040299 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1c0375c-7ae6-478c-a7a4-501faf59190c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"b1c0375c-7ae6-478c-a7a4-501faf59190c" (UID: "b1c0375c-7ae6-478c-a7a4-501faf59190c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.040326 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac198ef-b907-472d-8004-7c5f07fd55f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dac198ef-b907-472d-8004-7c5f07fd55f9" (UID: "dac198ef-b907-472d-8004-7c5f07fd55f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.040436 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dac198ef-b907-472d-8004-7c5f07fd55f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.040461 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1c0375c-7ae6-478c-a7a4-501faf59190c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.045569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac198ef-b907-472d-8004-7c5f07fd55f9-kube-api-access-mlcnc" (OuterVolumeSpecName: "kube-api-access-mlcnc") pod "dac198ef-b907-472d-8004-7c5f07fd55f9" (UID: "dac198ef-b907-472d-8004-7c5f07fd55f9"). InnerVolumeSpecName "kube-api-access-mlcnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.047005 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1c0375c-7ae6-478c-a7a4-501faf59190c-kube-api-access-9f5wf" (OuterVolumeSpecName: "kube-api-access-9f5wf") pod "b1c0375c-7ae6-478c-a7a4-501faf59190c" (UID: "b1c0375c-7ae6-478c-a7a4-501faf59190c"). InnerVolumeSpecName "kube-api-access-9f5wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.141766 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f5wf\" (UniqueName: \"kubernetes.io/projected/b1c0375c-7ae6-478c-a7a4-501faf59190c-kube-api-access-9f5wf\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.142011 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlcnc\" (UniqueName: \"kubernetes.io/projected/dac198ef-b907-472d-8004-7c5f07fd55f9-kube-api-access-mlcnc\") on node \"crc\" DevicePath \"\"" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.551299 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-j6kvd" event={"ID":"b1c0375c-7ae6-478c-a7a4-501faf59190c","Type":"ContainerDied","Data":"a573730cc153bfdfcde1fe2bfc1ea2aa1a9bfad903c67fb860f1a80a508f54fc"} Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.551614 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a573730cc153bfdfcde1fe2bfc1ea2aa1a9bfad903c67fb860f1a80a508f54fc" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.551385 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-j6kvd" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.559443 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-38fd-account-create-vmlhx" event={"ID":"dac198ef-b907-472d-8004-7c5f07fd55f9","Type":"ContainerDied","Data":"638f338c1cae89d62d9c2fa22d952250d55cf6b8b4d59e71fddcee0eddb455c1"} Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.559558 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="638f338c1cae89d62d9c2fa22d952250d55cf6b8b4d59e71fddcee0eddb455c1" Nov 22 08:41:51 crc kubenswrapper[4856]: I1122 08:41:51.559655 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-38fd-account-create-vmlhx" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.919714 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fb88bc67f-mcjjq"] Nov 22 08:41:52 crc kubenswrapper[4856]: E1122 08:41:52.920205 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac198ef-b907-472d-8004-7c5f07fd55f9" containerName="mariadb-account-create" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.920227 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac198ef-b907-472d-8004-7c5f07fd55f9" containerName="mariadb-account-create" Nov 22 08:41:52 crc kubenswrapper[4856]: E1122 08:41:52.920255 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1c0375c-7ae6-478c-a7a4-501faf59190c" containerName="mariadb-database-create" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.920264 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1c0375c-7ae6-478c-a7a4-501faf59190c" containerName="mariadb-database-create" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.920562 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1c0375c-7ae6-478c-a7a4-501faf59190c" containerName="mariadb-database-create" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.920595 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac198ef-b907-472d-8004-7c5f07fd55f9" containerName="mariadb-account-create" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.921894 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.949015 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb88bc67f-mcjjq"] Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.980214 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-dns-svc\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.980273 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.980307 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.980352 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dpjd\" (UniqueName: \"kubernetes.io/projected/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-kube-api-access-7dpjd\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:52 crc kubenswrapper[4856]: I1122 08:41:52.980402 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-config\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.032380 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-cbcx6"] Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.033619 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.042897 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9x44h" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.043111 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.043242 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.065457 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cbcx6"] Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.082557 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-dns-svc\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.082618 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.082653 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.082694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dpjd\" (UniqueName: \"kubernetes.io/projected/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-kube-api-access-7dpjd\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.083232 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-config\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.083552 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-dns-svc\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.084063 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-config\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.084128 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.084755 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.125021 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dpjd\" (UniqueName: \"kubernetes.io/projected/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-kube-api-access-7dpjd\") pod \"dnsmasq-dns-7fb88bc67f-mcjjq\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.185004 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9l64\" (UniqueName: \"kubernetes.io/projected/eccf4778-135b-45e6-958d-2ecd55a79d70-kube-api-access-n9l64\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.185099 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eccf4778-135b-45e6-958d-2ecd55a79d70-logs\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.185153 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-combined-ca-bundle\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.185193 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-config-data\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.185271 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-scripts\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.262868 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.286906 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-combined-ca-bundle\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.286958 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-config-data\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.287006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-scripts\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.287102 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9l64\" (UniqueName: \"kubernetes.io/projected/eccf4778-135b-45e6-958d-2ecd55a79d70-kube-api-access-n9l64\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.287154 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eccf4778-135b-45e6-958d-2ecd55a79d70-logs\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.287794 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eccf4778-135b-45e6-958d-2ecd55a79d70-logs\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.290782 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-combined-ca-bundle\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.291491 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-config-data\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.296333 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-scripts\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.307943 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9l64\" 
(UniqueName: \"kubernetes.io/projected/eccf4778-135b-45e6-958d-2ecd55a79d70-kube-api-access-n9l64\") pod \"placement-db-sync-cbcx6\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.415798 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-cbcx6" Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.772244 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb88bc67f-mcjjq"] Nov 22 08:41:53 crc kubenswrapper[4856]: I1122 08:41:53.925818 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-cbcx6"] Nov 22 08:41:53 crc kubenswrapper[4856]: W1122 08:41:53.929305 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeccf4778_135b_45e6_958d_2ecd55a79d70.slice/crio-31395818938ccb993d593911f9989f17c08e67b1138a335cbe32ebbb88442d63 WatchSource:0}: Error finding container 31395818938ccb993d593911f9989f17c08e67b1138a335cbe32ebbb88442d63: Status 404 returned error can't find the container with id 31395818938ccb993d593911f9989f17c08e67b1138a335cbe32ebbb88442d63 Nov 22 08:41:54 crc kubenswrapper[4856]: I1122 08:41:54.603842 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbcx6" event={"ID":"eccf4778-135b-45e6-958d-2ecd55a79d70","Type":"ContainerStarted","Data":"31395818938ccb993d593911f9989f17c08e67b1138a335cbe32ebbb88442d63"} Nov 22 08:41:54 crc kubenswrapper[4856]: I1122 08:41:54.606695 4856 generic.go:334] "Generic (PLEG): container finished" podID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerID="124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65" exitCode=0 Nov 22 08:41:54 crc kubenswrapper[4856]: I1122 08:41:54.606751 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" event={"ID":"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8","Type":"ContainerDied","Data":"124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65"} Nov 22 08:41:54 crc kubenswrapper[4856]: I1122 08:41:54.606828 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" event={"ID":"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8","Type":"ContainerStarted","Data":"6f5f445f244a011c986c6541300153d08c7d45eb24db599f9ae7b688c71c5fd3"} Nov 22 08:41:55 crc kubenswrapper[4856]: I1122 08:41:55.619501 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" event={"ID":"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8","Type":"ContainerStarted","Data":"a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540"} Nov 22 08:41:55 crc kubenswrapper[4856]: I1122 08:41:55.619952 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:41:55 crc kubenswrapper[4856]: I1122 08:41:55.648320 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" podStartSLOduration=3.6483024 podStartE2EDuration="3.6483024s" podCreationTimestamp="2025-11-22 08:41:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:41:55.635955458 +0000 UTC m=+5958.049348726" watchObservedRunningTime="2025-11-22 08:41:55.6483024 +0000 UTC m=+5958.061695658" Nov 22 08:41:56 crc 
kubenswrapper[4856]: I1122 08:41:56.652232 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:56 crc kubenswrapper[4856]: I1122 08:41:56.652535 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:41:57 crc kubenswrapper[4856]: I1122 08:41:57.702497 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fdsh" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="registry-server" probeResult="failure" output=< Nov 22 08:41:57 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 08:41:57 crc kubenswrapper[4856]: > Nov 22 08:41:58 crc kubenswrapper[4856]: I1122 08:41:58.653499 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbcx6" event={"ID":"eccf4778-135b-45e6-958d-2ecd55a79d70","Type":"ContainerStarted","Data":"58575bb00fe7a50af245f127b6ba46e7696c6107a76c3495c0f683146111a042"} Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.120085 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-cbcx6" podStartSLOduration=3.496109468 podStartE2EDuration="7.120058609s" podCreationTimestamp="2025-11-22 08:41:52 +0000 UTC" firstStartedPulling="2025-11-22 08:41:53.931403077 +0000 UTC m=+5956.344796335" lastFinishedPulling="2025-11-22 08:41:57.555352218 +0000 UTC m=+5959.968745476" observedRunningTime="2025-11-22 08:41:58.674642979 +0000 UTC m=+5961.088036237" watchObservedRunningTime="2025-11-22 08:41:59.120058609 +0000 UTC m=+5961.533451867" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.127228 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2f2s8"] Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.130784 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.140642 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2f2s8"] Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.232305 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-catalog-content\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.232376 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-utilities\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.232424 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm282\" (UniqueName: \"kubernetes.io/projected/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-kube-api-access-pm282\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.335576 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-catalog-content\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.335723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-utilities\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.335838 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm282\" (UniqueName: \"kubernetes.io/projected/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-kube-api-access-pm282\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.336188 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-catalog-content\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.336252 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-utilities\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.365868 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pm282\" (UniqueName: \"kubernetes.io/projected/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-kube-api-access-pm282\") pod \"certified-operators-2f2s8\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.455521 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.678651 4856 generic.go:334] "Generic (PLEG): container finished" podID="eccf4778-135b-45e6-958d-2ecd55a79d70" containerID="58575bb00fe7a50af245f127b6ba46e7696c6107a76c3495c0f683146111a042" exitCode=0 Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.678742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbcx6" event={"ID":"eccf4778-135b-45e6-958d-2ecd55a79d70","Type":"ContainerDied","Data":"58575bb00fe7a50af245f127b6ba46e7696c6107a76c3495c0f683146111a042"} Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.754917 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.755329 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:41:59 crc kubenswrapper[4856]: I1122 08:41:59.995291 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2f2s8"] Nov 22 08:42:00 crc kubenswrapper[4856]: I1122 08:42:00.693271 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f2s8" event={"ID":"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d","Type":"ContainerDied","Data":"652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e"} Nov 22 08:42:00 crc kubenswrapper[4856]: I1122 08:42:00.693973 4856 generic.go:334] "Generic (PLEG): container finished" podID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerID="652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e" exitCode=0 Nov 22 08:42:00 crc kubenswrapper[4856]: I1122 08:42:00.694866 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f2s8" event={"ID":"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d","Type":"ContainerStarted","Data":"ef6f44c34de6534964e70fc4bb2e3dcf3a0077d77a99b8f90bbfb5b9d412b523"} Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.016633 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-cbcx6" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.073624 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eccf4778-135b-45e6-958d-2ecd55a79d70-logs\") pod \"eccf4778-135b-45e6-958d-2ecd55a79d70\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.073776 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-config-data\") pod \"eccf4778-135b-45e6-958d-2ecd55a79d70\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.073851 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-scripts\") pod \"eccf4778-135b-45e6-958d-2ecd55a79d70\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.073929 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9l64\" (UniqueName: \"kubernetes.io/projected/eccf4778-135b-45e6-958d-2ecd55a79d70-kube-api-access-n9l64\") pod \"eccf4778-135b-45e6-958d-2ecd55a79d70\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.073948 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-combined-ca-bundle\") pod \"eccf4778-135b-45e6-958d-2ecd55a79d70\" (UID: \"eccf4778-135b-45e6-958d-2ecd55a79d70\") " Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.074143 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eccf4778-135b-45e6-958d-2ecd55a79d70-logs" (OuterVolumeSpecName: "logs") pod "eccf4778-135b-45e6-958d-2ecd55a79d70" (UID: "eccf4778-135b-45e6-958d-2ecd55a79d70"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.074462 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eccf4778-135b-45e6-958d-2ecd55a79d70-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.079914 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-scripts" (OuterVolumeSpecName: "scripts") pod "eccf4778-135b-45e6-958d-2ecd55a79d70" (UID: "eccf4778-135b-45e6-958d-2ecd55a79d70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.079989 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eccf4778-135b-45e6-958d-2ecd55a79d70-kube-api-access-n9l64" (OuterVolumeSpecName: "kube-api-access-n9l64") pod "eccf4778-135b-45e6-958d-2ecd55a79d70" (UID: "eccf4778-135b-45e6-958d-2ecd55a79d70"). InnerVolumeSpecName "kube-api-access-n9l64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.103160 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-config-data" (OuterVolumeSpecName: "config-data") pod "eccf4778-135b-45e6-958d-2ecd55a79d70" (UID: "eccf4778-135b-45e6-958d-2ecd55a79d70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.104746 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eccf4778-135b-45e6-958d-2ecd55a79d70" (UID: "eccf4778-135b-45e6-958d-2ecd55a79d70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.176081 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.176109 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.176126 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9l64\" (UniqueName: \"kubernetes.io/projected/eccf4778-135b-45e6-958d-2ecd55a79d70-kube-api-access-n9l64\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.176134 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eccf4778-135b-45e6-958d-2ecd55a79d70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.707170 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f2s8" event={"ID":"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d","Type":"ContainerStarted","Data":"be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9"} Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.710317 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-cbcx6" event={"ID":"eccf4778-135b-45e6-958d-2ecd55a79d70","Type":"ContainerDied","Data":"31395818938ccb993d593911f9989f17c08e67b1138a335cbe32ebbb88442d63"} Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.710388 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31395818938ccb993d593911f9989f17c08e67b1138a335cbe32ebbb88442d63" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.710351 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-cbcx6" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.878493 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-764f7c8c76-8hp76"] Nov 22 08:42:01 crc kubenswrapper[4856]: E1122 08:42:01.878965 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eccf4778-135b-45e6-958d-2ecd55a79d70" containerName="placement-db-sync" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.878978 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eccf4778-135b-45e6-958d-2ecd55a79d70" containerName="placement-db-sync" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.879180 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eccf4778-135b-45e6-958d-2ecd55a79d70" containerName="placement-db-sync" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.880387 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.885320 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.885642 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-9x44h" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.885653 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.886458 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.894701 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.903020 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-764f7c8c76-8hp76"] Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.992488 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-public-tls-certs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.992657 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-config-data\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.992695 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-internal-tls-certs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.992782 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svxd7\" (UniqueName: \"kubernetes.io/projected/c6cb4a05-65f9-4ff0-814d-f7530da47c97-kube-api-access-svxd7\") pod 
\"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.992809 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-scripts\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.992829 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-combined-ca-bundle\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:01 crc kubenswrapper[4856]: I1122 08:42:01.992876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6cb4a05-65f9-4ff0-814d-f7530da47c97-logs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.094393 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svxd7\" (UniqueName: \"kubernetes.io/projected/c6cb4a05-65f9-4ff0-814d-f7530da47c97-kube-api-access-svxd7\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.094443 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-scripts\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.094470 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-combined-ca-bundle\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.094531 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6cb4a05-65f9-4ff0-814d-f7530da47c97-logs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.094621 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-public-tls-certs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.094676 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-config-data\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " 
pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.094706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-internal-tls-certs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.095162 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6cb4a05-65f9-4ff0-814d-f7530da47c97-logs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.099733 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-internal-tls-certs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.099834 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-combined-ca-bundle\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.100089 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-scripts\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.100394 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-public-tls-certs\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.100468 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6cb4a05-65f9-4ff0-814d-f7530da47c97-config-data\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.111803 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svxd7\" (UniqueName: \"kubernetes.io/projected/c6cb4a05-65f9-4ff0-814d-f7530da47c97-kube-api-access-svxd7\") pod \"placement-764f7c8c76-8hp76\" (UID: \"c6cb4a05-65f9-4ff0-814d-f7530da47c97\") " pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.210396 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.721551 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-764f7c8c76-8hp76"] Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.725796 4856 generic.go:334] "Generic (PLEG): container finished" podID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerID="be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9" exitCode=0 Nov 22 08:42:02 crc kubenswrapper[4856]: I1122 08:42:02.725844 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f2s8" event={"ID":"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d","Type":"ContainerDied","Data":"be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9"} Nov 22 08:42:03 crc kubenswrapper[4856]: I1122 08:42:03.264415 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:42:03 crc kubenswrapper[4856]: I1122 08:42:03.315819 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594596d755-4csnw"] Nov 22 08:42:03 crc kubenswrapper[4856]: I1122 08:42:03.316078 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-594596d755-4csnw" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerName="dnsmasq-dns" containerID="cri-o://3b4ae1e4819eec39b691cd248b971c88728476686b03d8b85062fd4611557e9e" gracePeriod=10 Nov 22 08:42:03 crc kubenswrapper[4856]: I1122 08:42:03.749079 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-764f7c8c76-8hp76" event={"ID":"c6cb4a05-65f9-4ff0-814d-f7530da47c97","Type":"ContainerStarted","Data":"f3a9c5507d2f2f9ec96f5d0eab166c4da45b844dcca6e63b1534ab6ef2b15aa7"} Nov 22 08:42:03 crc kubenswrapper[4856]: I1122 08:42:03.749374 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-764f7c8c76-8hp76" event={"ID":"c6cb4a05-65f9-4ff0-814d-f7530da47c97","Type":"ContainerStarted","Data":"f015aa33d5d09c0b4ab5e7e989383f757e4f9b465bae863078c4a6075447ef7d"} Nov 22 08:42:03 crc kubenswrapper[4856]: I1122 08:42:03.754534 4856 generic.go:334] "Generic (PLEG): container finished" podID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerID="3b4ae1e4819eec39b691cd248b971c88728476686b03d8b85062fd4611557e9e" exitCode=0 Nov 22 08:42:03 crc kubenswrapper[4856]: I1122 08:42:03.754586 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594596d755-4csnw" event={"ID":"9793b3c1-724a-4ac4-979e-c599b578ea24","Type":"ContainerDied","Data":"3b4ae1e4819eec39b691cd248b971c88728476686b03d8b85062fd4611557e9e"} Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.306590 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.343171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-config\") pod \"9793b3c1-724a-4ac4-979e-c599b578ea24\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.343347 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-sb\") pod \"9793b3c1-724a-4ac4-979e-c599b578ea24\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.343442 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-nb\") pod \"9793b3c1-724a-4ac4-979e-c599b578ea24\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.343604 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-dns-svc\") pod \"9793b3c1-724a-4ac4-979e-c599b578ea24\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.343645 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zcnj\" (UniqueName: \"kubernetes.io/projected/9793b3c1-724a-4ac4-979e-c599b578ea24-kube-api-access-9zcnj\") pod \"9793b3c1-724a-4ac4-979e-c599b578ea24\" (UID: \"9793b3c1-724a-4ac4-979e-c599b578ea24\") " Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.353498 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9793b3c1-724a-4ac4-979e-c599b578ea24-kube-api-access-9zcnj" (OuterVolumeSpecName: "kube-api-access-9zcnj") pod "9793b3c1-724a-4ac4-979e-c599b578ea24" (UID: "9793b3c1-724a-4ac4-979e-c599b578ea24"). InnerVolumeSpecName "kube-api-access-9zcnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.407321 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9793b3c1-724a-4ac4-979e-c599b578ea24" (UID: "9793b3c1-724a-4ac4-979e-c599b578ea24"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.409367 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-config" (OuterVolumeSpecName: "config") pod "9793b3c1-724a-4ac4-979e-c599b578ea24" (UID: "9793b3c1-724a-4ac4-979e-c599b578ea24"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.412491 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9793b3c1-724a-4ac4-979e-c599b578ea24" (UID: "9793b3c1-724a-4ac4-979e-c599b578ea24"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.412792 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9793b3c1-724a-4ac4-979e-c599b578ea24" (UID: "9793b3c1-724a-4ac4-979e-c599b578ea24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.448084 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.448160 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.448174 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.448187 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zcnj\" (UniqueName: \"kubernetes.io/projected/9793b3c1-724a-4ac4-979e-c599b578ea24-kube-api-access-9zcnj\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:04 crc kubenswrapper[4856]: I1122 08:42:04.448202 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9793b3c1-724a-4ac4-979e-c599b578ea24-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.534670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-594596d755-4csnw" event={"ID":"9793b3c1-724a-4ac4-979e-c599b578ea24","Type":"ContainerDied","Data":"6fa6e451633bf61b03e90d6b60ed79bc0b810361639a62c101bf53e5146a584a"} Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.534943 4856 scope.go:117] "RemoveContainer" containerID="3b4ae1e4819eec39b691cd248b971c88728476686b03d8b85062fd4611557e9e" Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.535083 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.539771 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-764f7c8c76-8hp76" event={"ID":"c6cb4a05-65f9-4ff0-814d-f7530da47c97","Type":"ContainerStarted","Data":"3e7d942ca55f2839c686f2503008b018e2233b1f3d43ffc91f0ad875f021d291"} Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.542188 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.542240 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.550363 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f2s8" event={"ID":"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d","Type":"ContainerStarted","Data":"8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183"} Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.578054 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-764f7c8c76-8hp76" podStartSLOduration=4.578029785 podStartE2EDuration="4.578029785s" podCreationTimestamp="2025-11-22 08:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:42:05.567910563 +0000 UTC m=+5967.981303821" watchObservedRunningTime="2025-11-22 08:42:05.578029785 +0000 UTC m=+5967.991423053" Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.591297 4856 scope.go:117] "RemoveContainer" containerID="1fe8f0c0f1754657d34484a0f82b8ab7fecaf9a3286d0f3c7227f28a07bd14c0" Nov 22 08:42:05 crc kubenswrapper[4856]: I1122 08:42:05.598451 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2f2s8" podStartSLOduration=3.588774323 podStartE2EDuration="6.598430144s" podCreationTimestamp="2025-11-22 08:41:59 +0000 UTC" firstStartedPulling="2025-11-22 08:42:00.696306409 +0000 UTC m=+5963.109699657" lastFinishedPulling="2025-11-22 08:42:03.70596222 +0000 UTC m=+5966.119355478" observedRunningTime="2025-11-22 08:42:05.592622217 +0000 UTC m=+5968.006015485" watchObservedRunningTime="2025-11-22 08:42:05.598430144 +0000 UTC m=+5968.011823392" Nov 22 08:42:07 crc kubenswrapper[4856]: I1122 08:42:07.700273 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7fdsh" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="registry-server" probeResult="failure" output=< Nov 22 08:42:07 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 08:42:07 crc kubenswrapper[4856]: > Nov 22 08:42:09 crc kubenswrapper[4856]: I1122 08:42:09.455904 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:42:09 crc kubenswrapper[4856]: I1122 08:42:09.456043 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:42:09 crc kubenswrapper[4856]: I1122 08:42:09.498696 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:42:09 crc kubenswrapper[4856]: I1122 08:42:09.638727 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:42:09 crc kubenswrapper[4856]: I1122 08:42:09.732264 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2f2s8"] Nov 22 08:42:11 crc kubenswrapper[4856]: I1122 08:42:11.603237 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2f2s8" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="registry-server" containerID="cri-o://8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183" gracePeriod=2 Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.091118 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.189306 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-utilities\") pod \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.189450 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm282\" (UniqueName: \"kubernetes.io/projected/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-kube-api-access-pm282\") pod \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.189578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-catalog-content\") pod \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\" (UID: \"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d\") " Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.190182 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-utilities" (OuterVolumeSpecName: "utilities") pod "c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" (UID: "c4030b78-44c0-461b-8eb4-d7a2e50b3f5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.196915 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-kube-api-access-pm282" (OuterVolumeSpecName: "kube-api-access-pm282") pod "c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" (UID: "c4030b78-44c0-461b-8eb4-d7a2e50b3f5d"). InnerVolumeSpecName "kube-api-access-pm282". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.227467 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" (UID: "c4030b78-44c0-461b-8eb4-d7a2e50b3f5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.292300 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm282\" (UniqueName: \"kubernetes.io/projected/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-kube-api-access-pm282\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.292343 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.292356 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.615571 4856 generic.go:334] "Generic (PLEG): container finished" podID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerID="8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183" exitCode=0 Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.615630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f2s8" event={"ID":"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d","Type":"ContainerDied","Data":"8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183"} Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.615669 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2f2s8" event={"ID":"c4030b78-44c0-461b-8eb4-d7a2e50b3f5d","Type":"ContainerDied","Data":"ef6f44c34de6534964e70fc4bb2e3dcf3a0077d77a99b8f90bbfb5b9d412b523"} Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.615691 4856 scope.go:117] "RemoveContainer" containerID="8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.616185 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2f2s8" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.649252 4856 scope.go:117] "RemoveContainer" containerID="be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.666622 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2f2s8"] Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.675300 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2f2s8"] Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.680225 4856 scope.go:117] "RemoveContainer" containerID="652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.710367 4856 scope.go:117] "RemoveContainer" containerID="8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183" Nov 22 08:42:12 crc kubenswrapper[4856]: E1122 08:42:12.711486 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183\": container with ID starting with 8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183 not found: ID does not exist" containerID="8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.711540 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183"} err="failed to get container status \"8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183\": rpc error: code = NotFound desc = could not find container \"8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183\": container with ID starting with 8da602d487302679465b565d89fa0091b470fec9cfcff04a13c8b8eaa9ee0183 not found: ID does not exist" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.711567 4856 scope.go:117] "RemoveContainer" containerID="be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9" Nov 22 08:42:12 crc kubenswrapper[4856]: E1122 08:42:12.712166 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9\": container with ID starting with be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9 not found: ID does not exist" containerID="be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.712196 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9"} err="failed to get container status \"be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9\": rpc error: code = NotFound desc = could not find container \"be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9\": container with ID starting with be217a0f2b6d2290e7cc3dad29690ab2e7841fb7e755c9d3906d4b5e61b569e9 not found: ID does not exist" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.712216 4856 scope.go:117] "RemoveContainer" containerID="652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e" Nov 22 08:42:12 crc kubenswrapper[4856]: E1122 08:42:12.712483 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e\": container with ID starting with 652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e not found: ID does not exist" containerID="652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.712529 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e"} err="failed to get container status \"652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e\": rpc error: code = NotFound desc = could not find container \"652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e\": container with ID starting with 652ed0a44715a97a835157f7f07e220d3871f78352ba5f16f47e644fb9deaa7e not found: ID does not exist" Nov 22 08:42:12 crc kubenswrapper[4856]: I1122 08:42:12.725203 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" path="/var/lib/kubelet/pods/c4030b78-44c0-461b-8eb4-d7a2e50b3f5d/volumes" Nov 22 08:42:12 crc kubenswrapper[4856]: E1122 08:42:12.821260 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4030b78_44c0_461b_8eb4_d7a2e50b3f5d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4030b78_44c0_461b_8eb4_d7a2e50b3f5d.slice/crio-ef6f44c34de6534964e70fc4bb2e3dcf3a0077d77a99b8f90bbfb5b9d412b523\": RecentStats: unable to find data in memory cache]" Nov 22 08:42:16 crc kubenswrapper[4856]: I1122 08:42:16.700668 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:42:16 crc kubenswrapper[4856]: I1122 08:42:16.754954 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:42:16 crc kubenswrapper[4856]: I1122 08:42:16.947979 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fdsh"] Nov 22 08:42:18 crc kubenswrapper[4856]: I1122 08:42:18.669530 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7fdsh" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="registry-server" containerID="cri-o://fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15" gracePeriod=2 Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.226273 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.339195 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-utilities\") pod \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.339255 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkbm6\" (UniqueName: \"kubernetes.io/projected/9a37b132-fd9a-48d4-8b0a-1cc67822396c-kube-api-access-dkbm6\") pod \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.339362 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-catalog-content\") pod \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\" (UID: \"9a37b132-fd9a-48d4-8b0a-1cc67822396c\") " Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.339943 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-utilities" (OuterVolumeSpecName: "utilities") pod "9a37b132-fd9a-48d4-8b0a-1cc67822396c" (UID: "9a37b132-fd9a-48d4-8b0a-1cc67822396c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.345471 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a37b132-fd9a-48d4-8b0a-1cc67822396c-kube-api-access-dkbm6" (OuterVolumeSpecName: "kube-api-access-dkbm6") pod "9a37b132-fd9a-48d4-8b0a-1cc67822396c" (UID: "9a37b132-fd9a-48d4-8b0a-1cc67822396c"). InnerVolumeSpecName "kube-api-access-dkbm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.422821 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a37b132-fd9a-48d4-8b0a-1cc67822396c" (UID: "9a37b132-fd9a-48d4-8b0a-1cc67822396c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.441351 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.441610 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkbm6\" (UniqueName: \"kubernetes.io/projected/9a37b132-fd9a-48d4-8b0a-1cc67822396c-kube-api-access-dkbm6\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.441673 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a37b132-fd9a-48d4-8b0a-1cc67822396c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.682434 4856 generic.go:334] "Generic (PLEG): container finished" podID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerID="fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15" exitCode=0 Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.682485 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fdsh" event={"ID":"9a37b132-fd9a-48d4-8b0a-1cc67822396c","Type":"ContainerDied","Data":"fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15"} Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.682499 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7fdsh" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.682560 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7fdsh" event={"ID":"9a37b132-fd9a-48d4-8b0a-1cc67822396c","Type":"ContainerDied","Data":"1393625bf9e7182b62c593c270ea133ae10db2d0898c4fd641803a6c5f37f4c0"} Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.682600 4856 scope.go:117] "RemoveContainer" containerID="fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.708153 4856 scope.go:117] "RemoveContainer" containerID="cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.726538 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7fdsh"] Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.733279 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7fdsh"] Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.754151 4856 scope.go:117] "RemoveContainer" containerID="7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.784712 4856 scope.go:117] "RemoveContainer" containerID="fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15" Nov 22 08:42:19 crc kubenswrapper[4856]: E1122 08:42:19.785961 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15\": container with ID starting with fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15 not found: ID does not exist" containerID="fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.786046 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15"} err="failed to get container status \"fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15\": rpc error: code = NotFound desc = could not find container \"fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15\": container with ID starting with fc97395f05f934f01462b5dfb996c655564cfe6783aaa6020b6d27ec47a79e15 not found: ID does not exist" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.786103 4856 scope.go:117] "RemoveContainer" containerID="cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851" Nov 22 08:42:19 crc kubenswrapper[4856]: E1122 08:42:19.786817 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851\": container with ID starting with cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851 not found: ID does not exist" containerID="cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.786857 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851"} err="failed to get container status \"cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851\": rpc error: code = NotFound desc = could not find container \"cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851\": container with ID starting with cb9bfea53e788db46f54e491e4ec15bcf0ecb5cbc5e7ca0766628cf4154e1851 not found: ID does not exist" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.786908 4856 scope.go:117] "RemoveContainer" containerID="7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d" Nov 22 08:42:19 crc kubenswrapper[4856]: E1122 08:42:19.787240 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d\": container with ID starting with 7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d not found: ID does not exist" containerID="7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d" Nov 22 08:42:19 crc kubenswrapper[4856]: I1122 08:42:19.787267 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d"} err="failed to get container status \"7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d\": rpc error: code = NotFound desc = could not find container \"7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d\": container with ID starting with 7dfbe5f320b4e6e05761cacda1fc3ab9a440253551a8d662632054ef54834a4d not found: ID does not exist" Nov 22 08:42:20 crc kubenswrapper[4856]: I1122 08:42:20.724759 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" path="/var/lib/kubelet/pods/9a37b132-fd9a-48d4-8b0a-1cc67822396c/volumes" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.082727 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6gr6j"] Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.085520 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="registry-server" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.085669 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="registry-server" Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.085737 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerName="init" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.085798 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerName="init" Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.085872 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="extract-utilities" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.085927 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="extract-utilities" Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.092968 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="extract-utilities" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093007 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="extract-utilities" Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.093042 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerName="dnsmasq-dns" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093050 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerName="dnsmasq-dns" Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.093070 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="extract-content" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093090 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="extract-content" Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.093117 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="extract-content" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093127 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="extract-content" Nov 22 08:42:28 crc kubenswrapper[4856]: E1122 08:42:28.093142 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="registry-server" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093151 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="registry-server" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093747 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" containerName="dnsmasq-dns" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093766 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4030b78-44c0-461b-8eb4-d7a2e50b3f5d" containerName="registry-server" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.093797 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9a37b132-fd9a-48d4-8b0a-1cc67822396c" containerName="registry-server" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.095411 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.095663 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gr6j"] Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.111297 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv9w4\" (UniqueName: \"kubernetes.io/projected/700e70ad-903c-45a6-9029-f32993e31566-kube-api-access-sv9w4\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.112167 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-catalog-content\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.112373 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-utilities\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.214586 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv9w4\" (UniqueName: \"kubernetes.io/projected/700e70ad-903c-45a6-9029-f32993e31566-kube-api-access-sv9w4\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.214655 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-catalog-content\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.214754 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-utilities\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.215389 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-utilities\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.215408 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-catalog-content\") pod \"redhat-marketplace-6gr6j\" (UID: 
\"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.233506 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv9w4\" (UniqueName: \"kubernetes.io/projected/700e70ad-903c-45a6-9029-f32993e31566-kube-api-access-sv9w4\") pod \"redhat-marketplace-6gr6j\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.427129 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:28 crc kubenswrapper[4856]: I1122 08:42:28.885816 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gr6j"] Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.755003 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.755480 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.755583 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.757409 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.757528 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" gracePeriod=600 Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.794794 4856 generic.go:334] "Generic (PLEG): container finished" podID="700e70ad-903c-45a6-9029-f32993e31566" containerID="6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503" exitCode=0 Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.794845 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gr6j" event={"ID":"700e70ad-903c-45a6-9029-f32993e31566","Type":"ContainerDied","Data":"6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503"} Nov 22 08:42:29 crc kubenswrapper[4856]: I1122 08:42:29.794872 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gr6j" event={"ID":"700e70ad-903c-45a6-9029-f32993e31566","Type":"ContainerStarted","Data":"f8b1f9a709b4ad1b9d9c6a828373517933c3d980343fb43820a74deeb099242c"} Nov 22 
08:42:29 crc kubenswrapper[4856]: E1122 08:42:29.887902 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:42:30 crc kubenswrapper[4856]: I1122 08:42:30.805479 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" exitCode=0 Nov 22 08:42:30 crc kubenswrapper[4856]: I1122 08:42:30.805537 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76"} Nov 22 08:42:30 crc kubenswrapper[4856]: I1122 08:42:30.805857 4856 scope.go:117] "RemoveContainer" containerID="d24613435baa98fcf4fed1d58784844b252bad2c404ea5e6d83f40d6769faaee" Nov 22 08:42:30 crc kubenswrapper[4856]: I1122 08:42:30.806520 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:42:30 crc kubenswrapper[4856]: E1122 08:42:30.806777 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:42:30 crc kubenswrapper[4856]: I1122 08:42:30.808132 4856 generic.go:334] "Generic (PLEG): container finished" podID="700e70ad-903c-45a6-9029-f32993e31566" containerID="9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce" exitCode=0 Nov 22 08:42:30 crc kubenswrapper[4856]: I1122 08:42:30.808295 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gr6j" event={"ID":"700e70ad-903c-45a6-9029-f32993e31566","Type":"ContainerDied","Data":"9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce"} Nov 22 08:42:31 crc kubenswrapper[4856]: I1122 08:42:31.824367 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gr6j" event={"ID":"700e70ad-903c-45a6-9029-f32993e31566","Type":"ContainerStarted","Data":"e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35"} Nov 22 08:42:31 crc kubenswrapper[4856]: I1122 08:42:31.845286 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6gr6j" podStartSLOduration=2.339305274 podStartE2EDuration="3.845268444s" podCreationTimestamp="2025-11-22 08:42:28 +0000 UTC" firstStartedPulling="2025-11-22 08:42:29.796149487 +0000 UTC m=+5992.209542745" lastFinishedPulling="2025-11-22 08:42:31.302112657 +0000 UTC m=+5993.715505915" observedRunningTime="2025-11-22 08:42:31.84024942 +0000 UTC m=+5994.253642688" watchObservedRunningTime="2025-11-22 08:42:31.845268444 +0000 UTC m=+5994.258661702" Nov 22 08:42:33 crc kubenswrapper[4856]: I1122 08:42:33.254868 4856 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:33 crc kubenswrapper[4856]: I1122 08:42:33.267478 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-764f7c8c76-8hp76" Nov 22 08:42:35 crc kubenswrapper[4856]: I1122 08:42:35.566972 4856 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod9793b3c1-724a-4ac4-979e-c599b578ea24"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod9793b3c1-724a-4ac4-979e-c599b578ea24] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9793b3c1_724a_4ac4_979e_c599b578ea24.slice" Nov 22 08:42:35 crc kubenswrapper[4856]: E1122 08:42:35.567048 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod9793b3c1-724a-4ac4-979e-c599b578ea24] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod9793b3c1-724a-4ac4-979e-c599b578ea24] : Timed out while waiting for systemd to remove kubepods-besteffort-pod9793b3c1_724a_4ac4_979e_c599b578ea24.slice" pod="openstack/dnsmasq-dns-594596d755-4csnw" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" Nov 22 08:42:35 crc kubenswrapper[4856]: I1122 08:42:35.867810 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-594596d755-4csnw" Nov 22 08:42:35 crc kubenswrapper[4856]: I1122 08:42:35.904161 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-594596d755-4csnw"] Nov 22 08:42:35 crc kubenswrapper[4856]: I1122 08:42:35.910774 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-594596d755-4csnw"] Nov 22 08:42:36 crc kubenswrapper[4856]: I1122 08:42:36.722411 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9793b3c1-724a-4ac4-979e-c599b578ea24" path="/var/lib/kubelet/pods/9793b3c1-724a-4ac4-979e-c599b578ea24/volumes" Nov 22 08:42:38 crc kubenswrapper[4856]: I1122 08:42:38.428205 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:38 crc kubenswrapper[4856]: I1122 08:42:38.428268 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:38 crc kubenswrapper[4856]: I1122 08:42:38.482326 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:38 crc kubenswrapper[4856]: I1122 08:42:38.953969 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:39 crc kubenswrapper[4856]: I1122 08:42:39.029847 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gr6j"] Nov 22 08:42:40 crc kubenswrapper[4856]: I1122 08:42:40.910555 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6gr6j" podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="registry-server" containerID="cri-o://e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35" gracePeriod=2 Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.304982 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.465058 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-utilities\") pod \"700e70ad-903c-45a6-9029-f32993e31566\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.465143 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sv9w4\" (UniqueName: \"kubernetes.io/projected/700e70ad-903c-45a6-9029-f32993e31566-kube-api-access-sv9w4\") pod \"700e70ad-903c-45a6-9029-f32993e31566\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.465210 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-catalog-content\") pod \"700e70ad-903c-45a6-9029-f32993e31566\" (UID: \"700e70ad-903c-45a6-9029-f32993e31566\") " Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.466468 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-utilities" (OuterVolumeSpecName: "utilities") pod "700e70ad-903c-45a6-9029-f32993e31566" (UID: "700e70ad-903c-45a6-9029-f32993e31566"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.467111 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.475712 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700e70ad-903c-45a6-9029-f32993e31566-kube-api-access-sv9w4" (OuterVolumeSpecName: "kube-api-access-sv9w4") pod "700e70ad-903c-45a6-9029-f32993e31566" (UID: "700e70ad-903c-45a6-9029-f32993e31566"). InnerVolumeSpecName "kube-api-access-sv9w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.488406 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "700e70ad-903c-45a6-9029-f32993e31566" (UID: "700e70ad-903c-45a6-9029-f32993e31566"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.568843 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/700e70ad-903c-45a6-9029-f32993e31566-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.568886 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sv9w4\" (UniqueName: \"kubernetes.io/projected/700e70ad-903c-45a6-9029-f32993e31566-kube-api-access-sv9w4\") on node \"crc\" DevicePath \"\"" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.709710 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:42:41 crc kubenswrapper[4856]: E1122 08:42:41.710071 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.920705 4856 generic.go:334] "Generic (PLEG): container finished" podID="700e70ad-903c-45a6-9029-f32993e31566" containerID="e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35" exitCode=0 Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.920774 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gr6j" event={"ID":"700e70ad-903c-45a6-9029-f32993e31566","Type":"ContainerDied","Data":"e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35"} Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.920809 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6gr6j" event={"ID":"700e70ad-903c-45a6-9029-f32993e31566","Type":"ContainerDied","Data":"f8b1f9a709b4ad1b9d9c6a828373517933c3d980343fb43820a74deeb099242c"} Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.920807 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6gr6j" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.920828 4856 scope.go:117] "RemoveContainer" containerID="e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.942934 4856 scope.go:117] "RemoveContainer" containerID="9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce" Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.966962 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gr6j"] Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.977784 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6gr6j"] Nov 22 08:42:41 crc kubenswrapper[4856]: I1122 08:42:41.987743 4856 scope.go:117] "RemoveContainer" containerID="6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503" Nov 22 08:42:42 crc kubenswrapper[4856]: I1122 08:42:42.100649 4856 scope.go:117] "RemoveContainer" containerID="e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35" Nov 22 08:42:42 crc kubenswrapper[4856]: E1122 08:42:42.101139 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35\": container with ID starting with e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35 not found: ID does not exist" containerID="e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35" Nov 22 08:42:42 crc kubenswrapper[4856]: I1122 08:42:42.101185 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35"} err="failed to get container status \"e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35\": rpc error: code = NotFound desc = could not find container \"e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35\": container with ID starting with e78e9d9961b438837ee488135b16034ecd59769041b55d57729b4d4e4c408c35 not found: ID does not exist" Nov 22 08:42:42 crc kubenswrapper[4856]: I1122 08:42:42.101216 4856 scope.go:117] "RemoveContainer" containerID="9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce" Nov 22 08:42:42 crc kubenswrapper[4856]: E1122 08:42:42.101659 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce\": container with ID starting with 9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce not found: ID does not exist" containerID="9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce" Nov 22 08:42:42 crc kubenswrapper[4856]: I1122 08:42:42.101729 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce"} err="failed to get container status \"9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce\": rpc error: code = NotFound desc = could not find container \"9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce\": container with ID starting with 9ded0e9663dcee04ef2c3101f73ba8148b4361f8f195e99f55a103968653a5ce not found: ID does not exist" Nov 22 08:42:42 crc kubenswrapper[4856]: I1122 08:42:42.101764 4856 scope.go:117] "RemoveContainer" 
containerID="6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503" Nov 22 08:42:42 crc kubenswrapper[4856]: E1122 08:42:42.102739 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503\": container with ID starting with 6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503 not found: ID does not exist" containerID="6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503" Nov 22 08:42:42 crc kubenswrapper[4856]: I1122 08:42:42.102778 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503"} err="failed to get container status \"6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503\": rpc error: code = NotFound desc = could not find container \"6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503\": container with ID starting with 6f4fa18ff7708210932d78ee21c2169c1c9bfc2aa9db7e059d2b89d0f6dc4503 not found: ID does not exist" Nov 22 08:42:42 crc kubenswrapper[4856]: I1122 08:42:42.731743 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700e70ad-903c-45a6-9029-f32993e31566" path="/var/lib/kubelet/pods/700e70ad-903c-45a6-9029-f32993e31566/volumes" Nov 22 08:42:53 crc kubenswrapper[4856]: I1122 08:42:53.709882 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:42:53 crc kubenswrapper[4856]: E1122 08:42:53.710903 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.042919 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-s9znq"] Nov 22 08:42:58 crc kubenswrapper[4856]: E1122 08:42:58.043960 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="extract-utilities" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.043981 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="extract-utilities" Nov 22 08:42:58 crc kubenswrapper[4856]: E1122 08:42:58.044017 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="registry-server" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.044025 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="registry-server" Nov 22 08:42:58 crc kubenswrapper[4856]: E1122 08:42:58.044052 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="extract-content" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.044062 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="extract-content" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.044267 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="700e70ad-903c-45a6-9029-f32993e31566" containerName="registry-server" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.045038 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.055418 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s9znq"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.096282 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-operator-scripts\") pod \"nova-api-db-create-s9znq\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.096570 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmjnv\" (UniqueName: \"kubernetes.io/projected/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-kube-api-access-gmjnv\") pod \"nova-api-db-create-s9znq\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.133252 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7vrtm"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.134830 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.141182 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7vrtm"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.198216 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmjnv\" (UniqueName: \"kubernetes.io/projected/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-kube-api-access-gmjnv\") pod \"nova-api-db-create-s9znq\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.198280 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chht9\" (UniqueName: \"kubernetes.io/projected/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-kube-api-access-chht9\") pod \"nova-cell0-db-create-7vrtm\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.198336 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-operator-scripts\") pod \"nova-cell0-db-create-7vrtm\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.198533 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-operator-scripts\") pod \"nova-api-db-create-s9znq\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.199332 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-operator-scripts\") pod \"nova-api-db-create-s9znq\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.233122 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmjnv\" (UniqueName: \"kubernetes.io/projected/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-kube-api-access-gmjnv\") pod \"nova-api-db-create-s9znq\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.244903 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3e38-account-create-5xm72"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.245923 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.248076 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.255987 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3e38-account-create-5xm72"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.299976 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chht9\" (UniqueName: \"kubernetes.io/projected/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-kube-api-access-chht9\") pod \"nova-cell0-db-create-7vrtm\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.300064 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-operator-scripts\") pod \"nova-cell0-db-create-7vrtm\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.300109 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5f20c18-ea7b-4018-a74d-3e18bcd85250-operator-scripts\") pod \"nova-api-3e38-account-create-5xm72\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.300137 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmkw7\" (UniqueName: \"kubernetes.io/projected/b5f20c18-ea7b-4018-a74d-3e18bcd85250-kube-api-access-nmkw7\") pod \"nova-api-3e38-account-create-5xm72\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.300873 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-operator-scripts\") pod \"nova-cell0-db-create-7vrtm\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.323063 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chht9\" (UniqueName: 
\"kubernetes.io/projected/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-kube-api-access-chht9\") pod \"nova-cell0-db-create-7vrtm\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.334394 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-92ttx"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.336139 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.342386 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-92ttx"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.384456 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s9znq" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.401697 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p66m\" (UniqueName: \"kubernetes.io/projected/c755cad0-e196-4b7a-ba18-c10722c9b550-kube-api-access-2p66m\") pod \"nova-cell1-db-create-92ttx\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.401865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c755cad0-e196-4b7a-ba18-c10722c9b550-operator-scripts\") pod \"nova-cell1-db-create-92ttx\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.401921 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5f20c18-ea7b-4018-a74d-3e18bcd85250-operator-scripts\") pod \"nova-api-3e38-account-create-5xm72\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.401952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmkw7\" (UniqueName: \"kubernetes.io/projected/b5f20c18-ea7b-4018-a74d-3e18bcd85250-kube-api-access-nmkw7\") pod \"nova-api-3e38-account-create-5xm72\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.403111 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5f20c18-ea7b-4018-a74d-3e18bcd85250-operator-scripts\") pod \"nova-api-3e38-account-create-5xm72\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.425170 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmkw7\" (UniqueName: \"kubernetes.io/projected/b5f20c18-ea7b-4018-a74d-3e18bcd85250-kube-api-access-nmkw7\") pod \"nova-api-3e38-account-create-5xm72\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.444632 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1d70-account-create-r64jw"] Nov 22 08:42:58 crc 
kubenswrapper[4856]: I1122 08:42:58.446024 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.448348 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.457675 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.475859 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1d70-account-create-r64jw"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.503519 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8r2\" (UniqueName: \"kubernetes.io/projected/20510931-5b0d-4be7-beec-83051479beb3-kube-api-access-vr8r2\") pod \"nova-cell0-1d70-account-create-r64jw\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.503929 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510931-5b0d-4be7-beec-83051479beb3-operator-scripts\") pod \"nova-cell0-1d70-account-create-r64jw\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.503964 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p66m\" (UniqueName: \"kubernetes.io/projected/c755cad0-e196-4b7a-ba18-c10722c9b550-kube-api-access-2p66m\") pod \"nova-cell1-db-create-92ttx\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.504245 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c755cad0-e196-4b7a-ba18-c10722c9b550-operator-scripts\") pod \"nova-cell1-db-create-92ttx\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.505676 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c755cad0-e196-4b7a-ba18-c10722c9b550-operator-scripts\") pod \"nova-cell1-db-create-92ttx\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.535232 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p66m\" (UniqueName: \"kubernetes.io/projected/c755cad0-e196-4b7a-ba18-c10722c9b550-kube-api-access-2p66m\") pod \"nova-cell1-db-create-92ttx\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.589142 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.607039 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8r2\" (UniqueName: \"kubernetes.io/projected/20510931-5b0d-4be7-beec-83051479beb3-kube-api-access-vr8r2\") pod \"nova-cell0-1d70-account-create-r64jw\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.608433 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510931-5b0d-4be7-beec-83051479beb3-operator-scripts\") pod \"nova-cell0-1d70-account-create-r64jw\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.609834 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510931-5b0d-4be7-beec-83051479beb3-operator-scripts\") pod \"nova-cell0-1d70-account-create-r64jw\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.635811 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8r2\" (UniqueName: \"kubernetes.io/projected/20510931-5b0d-4be7-beec-83051479beb3-kube-api-access-vr8r2\") pod \"nova-cell0-1d70-account-create-r64jw\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.652828 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ef37-account-create-hdh4q"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.653944 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.660267 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ef37-account-create-hdh4q"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.660413 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.694122 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.718187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk78r\" (UniqueName: \"kubernetes.io/projected/08390c68-8119-42e6-a654-44b0ccd422ad-kube-api-access-mk78r\") pod \"nova-cell1-ef37-account-create-hdh4q\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.718415 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08390c68-8119-42e6-a654-44b0ccd422ad-operator-scripts\") pod \"nova-cell1-ef37-account-create-hdh4q\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.821884 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08390c68-8119-42e6-a654-44b0ccd422ad-operator-scripts\") pod \"nova-cell1-ef37-account-create-hdh4q\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.822486 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk78r\" (UniqueName: \"kubernetes.io/projected/08390c68-8119-42e6-a654-44b0ccd422ad-kube-api-access-mk78r\") pod \"nova-cell1-ef37-account-create-hdh4q\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.822901 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08390c68-8119-42e6-a654-44b0ccd422ad-operator-scripts\") pod \"nova-cell1-ef37-account-create-hdh4q\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.846345 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk78r\" (UniqueName: \"kubernetes.io/projected/08390c68-8119-42e6-a654-44b0ccd422ad-kube-api-access-mk78r\") pod \"nova-cell1-ef37-account-create-hdh4q\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.866408 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.917178 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-s9znq"] Nov 22 08:42:58 crc kubenswrapper[4856]: I1122 08:42:58.976397 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:42:59 crc kubenswrapper[4856]: I1122 08:42:59.027775 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7vrtm"] Nov 22 08:42:59 crc kubenswrapper[4856]: W1122 08:42:59.063830 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafa3b1c6_8a5e_4182_8cdb_6a229c647fe0.slice/crio-668e10859f044731a6091e9f1549a76826bd0284ce40697a4bc1657055705dd7 WatchSource:0}: Error finding container 668e10859f044731a6091e9f1549a76826bd0284ce40697a4bc1657055705dd7: Status 404 returned error can't find the container with id 668e10859f044731a6091e9f1549a76826bd0284ce40697a4bc1657055705dd7 Nov 22 08:42:59 crc kubenswrapper[4856]: I1122 08:42:59.125000 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s9znq" event={"ID":"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be","Type":"ContainerStarted","Data":"50615e50ab1876a043cc0659a8279f34a311d5e59970431e323567e5d9c00c43"} Nov 22 08:42:59 crc kubenswrapper[4856]: I1122 08:42:59.127235 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7vrtm" event={"ID":"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0","Type":"ContainerStarted","Data":"668e10859f044731a6091e9f1549a76826bd0284ce40697a4bc1657055705dd7"} Nov 22 08:42:59 crc kubenswrapper[4856]: I1122 08:42:59.143064 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3e38-account-create-5xm72"] Nov 22 08:42:59 crc kubenswrapper[4856]: W1122 08:42:59.164709 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5f20c18_ea7b_4018_a74d_3e18bcd85250.slice/crio-250586cecca782e8ab3f3ebac988d1ae59ae23ba39f0765adf0600dce3882c87 WatchSource:0}: Error finding container 250586cecca782e8ab3f3ebac988d1ae59ae23ba39f0765adf0600dce3882c87: Status 404 returned error can't find the container with id 250586cecca782e8ab3f3ebac988d1ae59ae23ba39f0765adf0600dce3882c87 Nov 22 08:42:59 crc kubenswrapper[4856]: I1122 08:42:59.242363 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1d70-account-create-r64jw"] Nov 22 08:42:59 crc kubenswrapper[4856]: I1122 08:42:59.252456 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-92ttx"] Nov 22 08:42:59 crc kubenswrapper[4856]: I1122 08:42:59.541882 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ef37-account-create-hdh4q"] Nov 22 08:42:59 crc kubenswrapper[4856]: W1122 08:42:59.558765 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08390c68_8119_42e6_a654_44b0ccd422ad.slice/crio-8859fbfced98b19dd9f7ccdf2d06c43fe929e1d193efbfcb3472f971bfc0d67c WatchSource:0}: Error finding container 8859fbfced98b19dd9f7ccdf2d06c43fe929e1d193efbfcb3472f971bfc0d67c: Status 404 returned error can't find the container with id 8859fbfced98b19dd9f7ccdf2d06c43fe929e1d193efbfcb3472f971bfc0d67c Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.139021 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e38-account-create-5xm72" event={"ID":"b5f20c18-ea7b-4018-a74d-3e18bcd85250","Type":"ContainerStarted","Data":"15e0aaa1e96ae564811931d1e8608e46203e4e5379a0f50605507df045701c2b"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.139358 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e38-account-create-5xm72" event={"ID":"b5f20c18-ea7b-4018-a74d-3e18bcd85250","Type":"ContainerStarted","Data":"250586cecca782e8ab3f3ebac988d1ae59ae23ba39f0765adf0600dce3882c87"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.141808 4856 generic.go:334] "Generic (PLEG): container finished" podID="afa3b1c6-8a5e-4182-8cdb-6a229c647fe0" containerID="6d7485cf8959d8d16fa37e46d28d9362ae449ea080df9a35b78475fe456e4e5a" exitCode=0 Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.141913 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7vrtm" event={"ID":"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0","Type":"ContainerDied","Data":"6d7485cf8959d8d16fa37e46d28d9362ae449ea080df9a35b78475fe456e4e5a"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.144398 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef37-account-create-hdh4q" event={"ID":"08390c68-8119-42e6-a654-44b0ccd422ad","Type":"ContainerStarted","Data":"5a7a126b36eddf6debb9120e04d42173a79b2102b757179ef35d1eab58bbdb2a"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.144462 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef37-account-create-hdh4q" event={"ID":"08390c68-8119-42e6-a654-44b0ccd422ad","Type":"ContainerStarted","Data":"8859fbfced98b19dd9f7ccdf2d06c43fe929e1d193efbfcb3472f971bfc0d67c"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.147535 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1d70-account-create-r64jw" event={"ID":"20510931-5b0d-4be7-beec-83051479beb3","Type":"ContainerStarted","Data":"5791ae700212e39b9b30454ad838493176357c551bc6ba3c5ed35232de81d9a3"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.147598 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1d70-account-create-r64jw" event={"ID":"20510931-5b0d-4be7-beec-83051479beb3","Type":"ContainerStarted","Data":"99f17711e4b1ac8862ace1cd5f4640fc45773098cf3a80b5d30ceab4e2b149b5"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.149897 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-92ttx" event={"ID":"c755cad0-e196-4b7a-ba18-c10722c9b550","Type":"ContainerStarted","Data":"8907f44112bc37b7415ca0cd25d1319ed3f57d93a6ae48f72004cfdf1f6d8b73"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.149931 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-92ttx" event={"ID":"c755cad0-e196-4b7a-ba18-c10722c9b550","Type":"ContainerStarted","Data":"e6186ad8d4f3bc2668b98fbb7315573301a7ca991fcad9913360fc1d8a1cf276"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.152042 4856 generic.go:334] "Generic (PLEG): container finished" podID="1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be" containerID="d6746db713e2a7719f2bb82079706c80a7c92bb7919249692dc1e608cf514e78" exitCode=0 Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.152100 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s9znq" event={"ID":"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be","Type":"ContainerDied","Data":"d6746db713e2a7719f2bb82079706c80a7c92bb7919249692dc1e608cf514e78"} Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.170081 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3e38-account-create-5xm72" podStartSLOduration=2.17005449 podStartE2EDuration="2.17005449s" 
podCreationTimestamp="2025-11-22 08:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:00.160972345 +0000 UTC m=+6022.574365623" watchObservedRunningTime="2025-11-22 08:43:00.17005449 +0000 UTC m=+6022.583447748" Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.191758 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1d70-account-create-r64jw" podStartSLOduration=2.191721154 podStartE2EDuration="2.191721154s" podCreationTimestamp="2025-11-22 08:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:00.17896077 +0000 UTC m=+6022.592354028" watchObservedRunningTime="2025-11-22 08:43:00.191721154 +0000 UTC m=+6022.605114412" Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.233695 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-ef37-account-create-hdh4q" podStartSLOduration=2.233664605 podStartE2EDuration="2.233664605s" podCreationTimestamp="2025-11-22 08:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:00.225138555 +0000 UTC m=+6022.638531823" watchObservedRunningTime="2025-11-22 08:43:00.233664605 +0000 UTC m=+6022.647057863" Nov 22 08:43:00 crc kubenswrapper[4856]: I1122 08:43:00.274120 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-92ttx" podStartSLOduration=2.274102555 podStartE2EDuration="2.274102555s" podCreationTimestamp="2025-11-22 08:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:00.26356325 +0000 UTC m=+6022.676956508" watchObservedRunningTime="2025-11-22 08:43:00.274102555 +0000 UTC m=+6022.687495813" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.162731 4856 generic.go:334] "Generic (PLEG): container finished" podID="08390c68-8119-42e6-a654-44b0ccd422ad" containerID="5a7a126b36eddf6debb9120e04d42173a79b2102b757179ef35d1eab58bbdb2a" exitCode=0 Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.162882 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef37-account-create-hdh4q" event={"ID":"08390c68-8119-42e6-a654-44b0ccd422ad","Type":"ContainerDied","Data":"5a7a126b36eddf6debb9120e04d42173a79b2102b757179ef35d1eab58bbdb2a"} Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.166481 4856 generic.go:334] "Generic (PLEG): container finished" podID="20510931-5b0d-4be7-beec-83051479beb3" containerID="5791ae700212e39b9b30454ad838493176357c551bc6ba3c5ed35232de81d9a3" exitCode=0 Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.166623 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1d70-account-create-r64jw" event={"ID":"20510931-5b0d-4be7-beec-83051479beb3","Type":"ContainerDied","Data":"5791ae700212e39b9b30454ad838493176357c551bc6ba3c5ed35232de81d9a3"} Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.169390 4856 generic.go:334] "Generic (PLEG): container finished" podID="c755cad0-e196-4b7a-ba18-c10722c9b550" containerID="8907f44112bc37b7415ca0cd25d1319ed3f57d93a6ae48f72004cfdf1f6d8b73" exitCode=0 Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.169426 4856 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-db-create-92ttx" event={"ID":"c755cad0-e196-4b7a-ba18-c10722c9b550","Type":"ContainerDied","Data":"8907f44112bc37b7415ca0cd25d1319ed3f57d93a6ae48f72004cfdf1f6d8b73"} Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.175759 4856 generic.go:334] "Generic (PLEG): container finished" podID="b5f20c18-ea7b-4018-a74d-3e18bcd85250" containerID="15e0aaa1e96ae564811931d1e8608e46203e4e5379a0f50605507df045701c2b" exitCode=0 Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.176239 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e38-account-create-5xm72" event={"ID":"b5f20c18-ea7b-4018-a74d-3e18bcd85250","Type":"ContainerDied","Data":"15e0aaa1e96ae564811931d1e8608e46203e4e5379a0f50605507df045701c2b"} Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.719219 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.739296 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s9znq" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.833170 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chht9\" (UniqueName: \"kubernetes.io/projected/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-kube-api-access-chht9\") pod \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.833276 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmjnv\" (UniqueName: \"kubernetes.io/projected/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-kube-api-access-gmjnv\") pod \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.833321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-operator-scripts\") pod \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\" (UID: \"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0\") " Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.833589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-operator-scripts\") pod \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\" (UID: \"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be\") " Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.834288 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afa3b1c6-8a5e-4182-8cdb-6a229c647fe0" (UID: "afa3b1c6-8a5e-4182-8cdb-6a229c647fe0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.834473 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be" (UID: "1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.844157 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-kube-api-access-chht9" (OuterVolumeSpecName: "kube-api-access-chht9") pod "afa3b1c6-8a5e-4182-8cdb-6a229c647fe0" (UID: "afa3b1c6-8a5e-4182-8cdb-6a229c647fe0"). InnerVolumeSpecName "kube-api-access-chht9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.846549 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-kube-api-access-gmjnv" (OuterVolumeSpecName: "kube-api-access-gmjnv") pod "1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be" (UID: "1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be"). InnerVolumeSpecName "kube-api-access-gmjnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.936189 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.936250 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chht9\" (UniqueName: \"kubernetes.io/projected/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-kube-api-access-chht9\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.936265 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmjnv\" (UniqueName: \"kubernetes.io/projected/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be-kube-api-access-gmjnv\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:01 crc kubenswrapper[4856]: I1122 08:43:01.936277 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.194864 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-s9znq" event={"ID":"1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be","Type":"ContainerDied","Data":"50615e50ab1876a043cc0659a8279f34a311d5e59970431e323567e5d9c00c43"} Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.194949 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50615e50ab1876a043cc0659a8279f34a311d5e59970431e323567e5d9c00c43" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.194944 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-s9znq" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.197156 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7vrtm" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.197206 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7vrtm" event={"ID":"afa3b1c6-8a5e-4182-8cdb-6a229c647fe0","Type":"ContainerDied","Data":"668e10859f044731a6091e9f1549a76826bd0284ce40697a4bc1657055705dd7"} Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.197247 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="668e10859f044731a6091e9f1549a76826bd0284ce40697a4bc1657055705dd7" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.430866 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.546492 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c755cad0-e196-4b7a-ba18-c10722c9b550-operator-scripts\") pod \"c755cad0-e196-4b7a-ba18-c10722c9b550\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.546811 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p66m\" (UniqueName: \"kubernetes.io/projected/c755cad0-e196-4b7a-ba18-c10722c9b550-kube-api-access-2p66m\") pod \"c755cad0-e196-4b7a-ba18-c10722c9b550\" (UID: \"c755cad0-e196-4b7a-ba18-c10722c9b550\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.547350 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c755cad0-e196-4b7a-ba18-c10722c9b550-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c755cad0-e196-4b7a-ba18-c10722c9b550" (UID: "c755cad0-e196-4b7a-ba18-c10722c9b550"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.552319 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c755cad0-e196-4b7a-ba18-c10722c9b550-kube-api-access-2p66m" (OuterVolumeSpecName: "kube-api-access-2p66m") pod "c755cad0-e196-4b7a-ba18-c10722c9b550" (UID: "c755cad0-e196-4b7a-ba18-c10722c9b550"). InnerVolumeSpecName "kube-api-access-2p66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.649828 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c755cad0-e196-4b7a-ba18-c10722c9b550-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.649882 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p66m\" (UniqueName: \"kubernetes.io/projected/c755cad0-e196-4b7a-ba18-c10722c9b550-kube-api-access-2p66m\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.724488 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.736392 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.749210 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.751190 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08390c68-8119-42e6-a654-44b0ccd422ad-operator-scripts\") pod \"08390c68-8119-42e6-a654-44b0ccd422ad\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.751471 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk78r\" (UniqueName: \"kubernetes.io/projected/08390c68-8119-42e6-a654-44b0ccd422ad-kube-api-access-mk78r\") pod \"08390c68-8119-42e6-a654-44b0ccd422ad\" (UID: \"08390c68-8119-42e6-a654-44b0ccd422ad\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.754342 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08390c68-8119-42e6-a654-44b0ccd422ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08390c68-8119-42e6-a654-44b0ccd422ad" (UID: "08390c68-8119-42e6-a654-44b0ccd422ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.758779 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08390c68-8119-42e6-a654-44b0ccd422ad-kube-api-access-mk78r" (OuterVolumeSpecName: "kube-api-access-mk78r") pod "08390c68-8119-42e6-a654-44b0ccd422ad" (UID: "08390c68-8119-42e6-a654-44b0ccd422ad"). InnerVolumeSpecName "kube-api-access-mk78r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.855244 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510931-5b0d-4be7-beec-83051479beb3-operator-scripts\") pod \"20510931-5b0d-4be7-beec-83051479beb3\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.856264 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5f20c18-ea7b-4018-a74d-3e18bcd85250-operator-scripts\") pod \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.856933 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20510931-5b0d-4be7-beec-83051479beb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "20510931-5b0d-4be7-beec-83051479beb3" (UID: "20510931-5b0d-4be7-beec-83051479beb3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.856942 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr8r2\" (UniqueName: \"kubernetes.io/projected/20510931-5b0d-4be7-beec-83051479beb3-kube-api-access-vr8r2\") pod \"20510931-5b0d-4be7-beec-83051479beb3\" (UID: \"20510931-5b0d-4be7-beec-83051479beb3\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.857059 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmkw7\" (UniqueName: \"kubernetes.io/projected/b5f20c18-ea7b-4018-a74d-3e18bcd85250-kube-api-access-nmkw7\") pod \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\" (UID: \"b5f20c18-ea7b-4018-a74d-3e18bcd85250\") " Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.857413 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f20c18-ea7b-4018-a74d-3e18bcd85250-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b5f20c18-ea7b-4018-a74d-3e18bcd85250" (UID: "b5f20c18-ea7b-4018-a74d-3e18bcd85250"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.858168 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk78r\" (UniqueName: \"kubernetes.io/projected/08390c68-8119-42e6-a654-44b0ccd422ad-kube-api-access-mk78r\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.858198 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/20510931-5b0d-4be7-beec-83051479beb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.858212 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b5f20c18-ea7b-4018-a74d-3e18bcd85250-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.858224 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08390c68-8119-42e6-a654-44b0ccd422ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.861339 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f20c18-ea7b-4018-a74d-3e18bcd85250-kube-api-access-nmkw7" (OuterVolumeSpecName: "kube-api-access-nmkw7") pod "b5f20c18-ea7b-4018-a74d-3e18bcd85250" (UID: "b5f20c18-ea7b-4018-a74d-3e18bcd85250"). InnerVolumeSpecName "kube-api-access-nmkw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.864169 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20510931-5b0d-4be7-beec-83051479beb3-kube-api-access-vr8r2" (OuterVolumeSpecName: "kube-api-access-vr8r2") pod "20510931-5b0d-4be7-beec-83051479beb3" (UID: "20510931-5b0d-4be7-beec-83051479beb3"). InnerVolumeSpecName "kube-api-access-vr8r2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.959655 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr8r2\" (UniqueName: \"kubernetes.io/projected/20510931-5b0d-4be7-beec-83051479beb3-kube-api-access-vr8r2\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:02 crc kubenswrapper[4856]: I1122 08:43:02.959871 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmkw7\" (UniqueName: \"kubernetes.io/projected/b5f20c18-ea7b-4018-a74d-3e18bcd85250-kube-api-access-nmkw7\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.206195 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3e38-account-create-5xm72" event={"ID":"b5f20c18-ea7b-4018-a74d-3e18bcd85250","Type":"ContainerDied","Data":"250586cecca782e8ab3f3ebac988d1ae59ae23ba39f0765adf0600dce3882c87"} Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.206242 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="250586cecca782e8ab3f3ebac988d1ae59ae23ba39f0765adf0600dce3882c87" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.206261 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3e38-account-create-5xm72" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.208371 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ef37-account-create-hdh4q" event={"ID":"08390c68-8119-42e6-a654-44b0ccd422ad","Type":"ContainerDied","Data":"8859fbfced98b19dd9f7ccdf2d06c43fe929e1d193efbfcb3472f971bfc0d67c"} Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.208434 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8859fbfced98b19dd9f7ccdf2d06c43fe929e1d193efbfcb3472f971bfc0d67c" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.208430 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ef37-account-create-hdh4q" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.210462 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1d70-account-create-r64jw" event={"ID":"20510931-5b0d-4be7-beec-83051479beb3","Type":"ContainerDied","Data":"99f17711e4b1ac8862ace1cd5f4640fc45773098cf3a80b5d30ceab4e2b149b5"} Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.210497 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1d70-account-create-r64jw" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.210528 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f17711e4b1ac8862ace1cd5f4640fc45773098cf3a80b5d30ceab4e2b149b5" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.212140 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-92ttx" event={"ID":"c755cad0-e196-4b7a-ba18-c10722c9b550","Type":"ContainerDied","Data":"e6186ad8d4f3bc2668b98fbb7315573301a7ca991fcad9913360fc1d8a1cf276"} Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.212168 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6186ad8d4f3bc2668b98fbb7315573301a7ca991fcad9913360fc1d8a1cf276" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.212284 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-92ttx" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.915471 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qvrrw"] Nov 22 08:43:03 crc kubenswrapper[4856]: E1122 08:43:03.916570 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c755cad0-e196-4b7a-ba18-c10722c9b550" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.916651 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c755cad0-e196-4b7a-ba18-c10722c9b550" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: E1122 08:43:03.916740 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08390c68-8119-42e6-a654-44b0ccd422ad" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.916795 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="08390c68-8119-42e6-a654-44b0ccd422ad" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: E1122 08:43:03.916858 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa3b1c6-8a5e-4182-8cdb-6a229c647fe0" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.916929 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa3b1c6-8a5e-4182-8cdb-6a229c647fe0" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: E1122 08:43:03.916993 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20510931-5b0d-4be7-beec-83051479beb3" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.917048 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="20510931-5b0d-4be7-beec-83051479beb3" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: E1122 08:43:03.917108 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f20c18-ea7b-4018-a74d-3e18bcd85250" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.917160 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f20c18-ea7b-4018-a74d-3e18bcd85250" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: E1122 08:43:03.917213 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.917271 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.917485 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="08390c68-8119-42e6-a654-44b0ccd422ad" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.917771 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c755cad0-e196-4b7a-ba18-c10722c9b550" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.917841 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="20510931-5b0d-4be7-beec-83051479beb3" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.917925 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa3b1c6-8a5e-4182-8cdb-6a229c647fe0" containerName="mariadb-database-create" Nov 
22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.918023 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f20c18-ea7b-4018-a74d-3e18bcd85250" containerName="mariadb-account-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.918094 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be" containerName="mariadb-database-create" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.918896 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.924618 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.924836 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lqv9b" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.925462 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 08:43:03 crc kubenswrapper[4856]: I1122 08:43:03.945690 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qvrrw"] Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.079772 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-config-data\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.079821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5plx\" (UniqueName: \"kubernetes.io/projected/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-kube-api-access-q5plx\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.079874 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-scripts\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.079979 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.181649 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.181723 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-config-data\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.181749 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5plx\" (UniqueName: \"kubernetes.io/projected/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-kube-api-access-q5plx\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.181789 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-scripts\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.187007 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-scripts\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.187205 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.187421 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-config-data\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.211304 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5plx\" (UniqueName: \"kubernetes.io/projected/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-kube-api-access-q5plx\") pod \"nova-cell0-conductor-db-sync-qvrrw\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.249770 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:04 crc kubenswrapper[4856]: I1122 08:43:04.721957 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qvrrw"] Nov 22 08:43:04 crc kubenswrapper[4856]: W1122 08:43:04.727393 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53c4a37a_d990_4b77_bed7_8537e9d9a0ad.slice/crio-33120a54374c6e516b08cc92220ede68f6c0989ff98379dcd5a605156834b236 WatchSource:0}: Error finding container 33120a54374c6e516b08cc92220ede68f6c0989ff98379dcd5a605156834b236: Status 404 returned error can't find the container with id 33120a54374c6e516b08cc92220ede68f6c0989ff98379dcd5a605156834b236 Nov 22 08:43:05 crc kubenswrapper[4856]: I1122 08:43:05.234287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" event={"ID":"53c4a37a-d990-4b77-bed7-8537e9d9a0ad","Type":"ContainerStarted","Data":"33120a54374c6e516b08cc92220ede68f6c0989ff98379dcd5a605156834b236"} Nov 22 08:43:07 crc kubenswrapper[4856]: I1122 08:43:07.710189 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:43:07 crc kubenswrapper[4856]: E1122 08:43:07.711126 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:43:15 crc kubenswrapper[4856]: I1122 08:43:15.333587 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" event={"ID":"53c4a37a-d990-4b77-bed7-8537e9d9a0ad","Type":"ContainerStarted","Data":"31d8c15268959b5a7a6965b9e26b6f903b5d5ede686115db1f6458d04ffeebaa"} Nov 22 08:43:15 crc kubenswrapper[4856]: I1122 08:43:15.354297 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" podStartSLOduration=2.756282041 podStartE2EDuration="12.354278584s" podCreationTimestamp="2025-11-22 08:43:03 +0000 UTC" firstStartedPulling="2025-11-22 08:43:04.73043538 +0000 UTC m=+6027.143828638" lastFinishedPulling="2025-11-22 08:43:14.328431923 +0000 UTC m=+6036.741825181" observedRunningTime="2025-11-22 08:43:15.34595467 +0000 UTC m=+6037.759347928" watchObservedRunningTime="2025-11-22 08:43:15.354278584 +0000 UTC m=+6037.767671842" Nov 22 08:43:18 crc kubenswrapper[4856]: I1122 08:43:18.728765 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:43:18 crc kubenswrapper[4856]: E1122 08:43:18.729598 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:43:20 crc kubenswrapper[4856]: I1122 08:43:20.398864 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="53c4a37a-d990-4b77-bed7-8537e9d9a0ad" containerID="31d8c15268959b5a7a6965b9e26b6f903b5d5ede686115db1f6458d04ffeebaa" exitCode=0 Nov 22 08:43:20 crc kubenswrapper[4856]: I1122 08:43:20.398922 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" event={"ID":"53c4a37a-d990-4b77-bed7-8537e9d9a0ad","Type":"ContainerDied","Data":"31d8c15268959b5a7a6965b9e26b6f903b5d5ede686115db1f6458d04ffeebaa"} Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.737543 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.845072 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-combined-ca-bundle\") pod \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.845291 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-config-data\") pod \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.845334 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-scripts\") pod \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.845404 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5plx\" (UniqueName: \"kubernetes.io/projected/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-kube-api-access-q5plx\") pod \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\" (UID: \"53c4a37a-d990-4b77-bed7-8537e9d9a0ad\") " Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.860028 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-kube-api-access-q5plx" (OuterVolumeSpecName: "kube-api-access-q5plx") pod "53c4a37a-d990-4b77-bed7-8537e9d9a0ad" (UID: "53c4a37a-d990-4b77-bed7-8537e9d9a0ad"). InnerVolumeSpecName "kube-api-access-q5plx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.860055 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-scripts" (OuterVolumeSpecName: "scripts") pod "53c4a37a-d990-4b77-bed7-8537e9d9a0ad" (UID: "53c4a37a-d990-4b77-bed7-8537e9d9a0ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.870825 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-config-data" (OuterVolumeSpecName: "config-data") pod "53c4a37a-d990-4b77-bed7-8537e9d9a0ad" (UID: "53c4a37a-d990-4b77-bed7-8537e9d9a0ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.877497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53c4a37a-d990-4b77-bed7-8537e9d9a0ad" (UID: "53c4a37a-d990-4b77-bed7-8537e9d9a0ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.947791 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.947831 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.947842 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:21 crc kubenswrapper[4856]: I1122 08:43:21.947852 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5plx\" (UniqueName: \"kubernetes.io/projected/53c4a37a-d990-4b77-bed7-8537e9d9a0ad-kube-api-access-q5plx\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.417316 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" event={"ID":"53c4a37a-d990-4b77-bed7-8537e9d9a0ad","Type":"ContainerDied","Data":"33120a54374c6e516b08cc92220ede68f6c0989ff98379dcd5a605156834b236"} Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.417360 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33120a54374c6e516b08cc92220ede68f6c0989ff98379dcd5a605156834b236" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.417414 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qvrrw" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.489086 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 08:43:22 crc kubenswrapper[4856]: E1122 08:43:22.489683 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53c4a37a-d990-4b77-bed7-8537e9d9a0ad" containerName="nova-cell0-conductor-db-sync" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.489708 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="53c4a37a-d990-4b77-bed7-8537e9d9a0ad" containerName="nova-cell0-conductor-db-sync" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.489941 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="53c4a37a-d990-4b77-bed7-8537e9d9a0ad" containerName="nova-cell0-conductor-db-sync" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.490765 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.494580 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-lqv9b" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.494876 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.499138 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.558382 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.558630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zgxs\" (UniqueName: \"kubernetes.io/projected/9dc40524-1b5e-4265-b926-8714e07bc20d-kube-api-access-9zgxs\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.558667 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.660277 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.660480 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zgxs\" (UniqueName: \"kubernetes.io/projected/9dc40524-1b5e-4265-b926-8714e07bc20d-kube-api-access-9zgxs\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.660544 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.664310 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.669005 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.676157 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zgxs\" (UniqueName: \"kubernetes.io/projected/9dc40524-1b5e-4265-b926-8714e07bc20d-kube-api-access-9zgxs\") pod \"nova-cell0-conductor-0\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:22 crc kubenswrapper[4856]: I1122 08:43:22.810131 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:23 crc kubenswrapper[4856]: I1122 08:43:23.236905 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 08:43:23 crc kubenswrapper[4856]: I1122 08:43:23.433230 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9dc40524-1b5e-4265-b926-8714e07bc20d","Type":"ContainerStarted","Data":"4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785"} Nov 22 08:43:23 crc kubenswrapper[4856]: I1122 08:43:23.433276 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9dc40524-1b5e-4265-b926-8714e07bc20d","Type":"ContainerStarted","Data":"9655c0d336c0c5fe388f0a646a9e95d768e8641e0194311b0aa6487972cb682e"} Nov 22 08:43:23 crc kubenswrapper[4856]: I1122 08:43:23.433413 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:23 crc kubenswrapper[4856]: I1122 08:43:23.448970 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.44895625 podStartE2EDuration="1.44895625s" podCreationTimestamp="2025-11-22 08:43:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:23.44709227 +0000 UTC m=+6045.860485528" watchObservedRunningTime="2025-11-22 08:43:23.44895625 +0000 UTC m=+6045.862349508" Nov 22 08:43:29 crc kubenswrapper[4856]: I1122 08:43:29.710594 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:43:29 crc kubenswrapper[4856]: E1122 08:43:29.711432 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:43:32 crc kubenswrapper[4856]: I1122 08:43:32.840658 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.263265 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-k7dt8"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.265056 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.272040 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.272057 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.273814 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7dt8"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.361563 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-config-data\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.361612 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.361647 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-scripts\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.361760 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nls9p\" (UniqueName: \"kubernetes.io/projected/e6b23a7e-3095-43b9-846f-48d7a5b9b628-kube-api-access-nls9p\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.385437 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.388673 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.392489 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.417033 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.463682 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.463746 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-config-data\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.463809 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-config-data\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.463844 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.463889 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-scripts\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.463967 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/703d7352-0a2c-419a-ad78-89c510f60a22-logs\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.464043 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nls9p\" (UniqueName: \"kubernetes.io/projected/e6b23a7e-3095-43b9-846f-48d7a5b9b628-kube-api-access-nls9p\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.464086 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b8k4\" (UniqueName: \"kubernetes.io/projected/703d7352-0a2c-419a-ad78-89c510f60a22-kube-api-access-2b8k4\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.471802 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.473934 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-scripts\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.474456 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-config-data\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.496165 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nls9p\" (UniqueName: \"kubernetes.io/projected/e6b23a7e-3095-43b9-846f-48d7a5b9b628-kube-api-access-nls9p\") pod \"nova-cell0-cell-mapping-k7dt8\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.513353 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.514905 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.518008 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.543997 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.576485 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/703d7352-0a2c-419a-ad78-89c510f60a22-logs\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.576558 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b8k4\" (UniqueName: \"kubernetes.io/projected/703d7352-0a2c-419a-ad78-89c510f60a22-kube-api-access-2b8k4\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.576626 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.576643 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-config-data\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.578004 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/703d7352-0a2c-419a-ad78-89c510f60a22-logs\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.585292 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.588498 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.594530 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-config-data\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.602285 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.604739 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.605167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b8k4\" (UniqueName: \"kubernetes.io/projected/703d7352-0a2c-419a-ad78-89c510f60a22-kube-api-access-2b8k4\") pod \"nova-api-0\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.613915 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.636397 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.678942 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.679338 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee9938c-75b9-4737-9210-edb57a6dc1c2-logs\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.679431 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-config-data\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.679482 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l589h\" (UniqueName: \"kubernetes.io/projected/0f98e511-1796-4994-9530-eadf8b7d54e4-kube-api-access-l589h\") pod \"nova-scheduler-0\" (UID: 
\"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.679506 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c99p2\" (UniqueName: \"kubernetes.io/projected/eee9938c-75b9-4737-9210-edb57a6dc1c2-kube-api-access-c99p2\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.679558 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-config-data\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.679586 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.730049 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.736801 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fbbd65c89-495gj"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.738378 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.769585 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.771094 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.783863 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-config-data\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.783973 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l589h\" (UniqueName: \"kubernetes.io/projected/0f98e511-1796-4994-9530-eadf8b7d54e4-kube-api-access-l589h\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.784001 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c99p2\" (UniqueName: \"kubernetes.io/projected/eee9938c-75b9-4737-9210-edb57a6dc1c2-kube-api-access-c99p2\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.784033 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-config-data\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.784063 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.784117 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.784164 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee9938c-75b9-4737-9210-edb57a6dc1c2-logs\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.784908 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee9938c-75b9-4737-9210-edb57a6dc1c2-logs\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.795283 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.795354 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.796045 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-config-data\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.800576 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbbd65c89-495gj"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.805082 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-config-data\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.810265 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.840429 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.880080 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c99p2\" (UniqueName: \"kubernetes.io/projected/eee9938c-75b9-4737-9210-edb57a6dc1c2-kube-api-access-c99p2\") pod \"nova-metadata-0\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " pod="openstack/nova-metadata-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.880233 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l589h\" (UniqueName: \"kubernetes.io/projected/0f98e511-1796-4994-9530-eadf8b7d54e4-kube-api-access-l589h\") pod \"nova-scheduler-0\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " pod="openstack/nova-scheduler-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.906964 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.923930 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-config\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.923987 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm79t\" (UniqueName: \"kubernetes.io/projected/ec9ab510-20f1-4265-8735-596ca8e18ae9-kube-api-access-sm79t\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.924069 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-dns-svc\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.924178 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2b7d\" (UniqueName: \"kubernetes.io/projected/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-kube-api-access-t2b7d\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.924286 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.924384 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:33 crc kubenswrapper[4856]: I1122 08:43:33.924417 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.033421 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-config\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.033783 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm79t\" (UniqueName: \"kubernetes.io/projected/ec9ab510-20f1-4265-8735-596ca8e18ae9-kube-api-access-sm79t\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.033827 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-dns-svc\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.033895 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2b7d\" (UniqueName: \"kubernetes.io/projected/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-kube-api-access-t2b7d\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.033962 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.034006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.034031 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.034067 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.036034 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-sb\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.036579 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-config\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.037390 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-dns-svc\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.042776 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-nb\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.056110 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.058135 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.058759 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.062620 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm79t\" (UniqueName: \"kubernetes.io/projected/ec9ab510-20f1-4265-8735-596ca8e18ae9-kube-api-access-sm79t\") pod \"nova-cell1-novncproxy-0\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.064468 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2b7d\" (UniqueName: \"kubernetes.io/projected/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-kube-api-access-t2b7d\") pod \"dnsmasq-dns-5fbbd65c89-495gj\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.070428 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.149005 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.226098 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.388415 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7dt8"] Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.574124 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.575726 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7dt8" event={"ID":"e6b23a7e-3095-43b9-846f-48d7a5b9b628","Type":"ContainerStarted","Data":"2d990f77a8b94bd92f10d7aa0e946fddebcf7407b067437b8990935cd89dcdcf"} Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.594431 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.629595 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.645624 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5bk7q"] Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.647832 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.649456 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.651504 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.669423 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5bk7q"] Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.703916 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.751812 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmvjd\" (UniqueName: \"kubernetes.io/projected/0e1a60af-c38b-436e-99aa-e3140fb55829-kube-api-access-cmvjd\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.751867 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.751936 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-config-data\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.751962 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-scripts\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.856022 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmvjd\" (UniqueName: \"kubernetes.io/projected/0e1a60af-c38b-436e-99aa-e3140fb55829-kube-api-access-cmvjd\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.857214 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.857396 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-config-data\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: 
\"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.857447 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-scripts\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.863122 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-scripts\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.863191 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-config-data\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.863460 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.877363 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmvjd\" (UniqueName: \"kubernetes.io/projected/0e1a60af-c38b-436e-99aa-e3140fb55829-kube-api-access-cmvjd\") pod \"nova-cell1-conductor-db-sync-5bk7q\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.922280 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fbbd65c89-495gj"] Nov 22 08:43:34 crc kubenswrapper[4856]: W1122 08:43:34.922436 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5fe2f94_8f3a_4f5c_bc88_76994a6d73b8.slice/crio-c084438205302fa5dad35445d5512dc7123144a8c0d2df3597816c718aead610 WatchSource:0}: Error finding container c084438205302fa5dad35445d5512dc7123144a8c0d2df3597816c718aead610: Status 404 returned error can't find the container with id c084438205302fa5dad35445d5512dc7123144a8c0d2df3597816c718aead610 Nov 22 08:43:34 crc kubenswrapper[4856]: I1122 08:43:34.993999 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:43:35 crc kubenswrapper[4856]: W1122 08:43:35.000954 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec9ab510_20f1_4265_8735_596ca8e18ae9.slice/crio-004ca29d429b340b046256af29709e0b9149a7402a5d8d345c782c3e5dbec429 WatchSource:0}: Error finding container 004ca29d429b340b046256af29709e0b9149a7402a5d8d345c782c3e5dbec429: Status 404 returned error can't find the container with id 004ca29d429b340b046256af29709e0b9149a7402a5d8d345c782c3e5dbec429 Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.102043 4856 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.600648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" event={"ID":"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8","Type":"ContainerDied","Data":"f1e9e90546389b0088fe327346b91714495ff6d955bfb1d17e8e444a855843c3"} Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.600393 4856 generic.go:334] "Generic (PLEG): container finished" podID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerID="f1e9e90546389b0088fe327346b91714495ff6d955bfb1d17e8e444a855843c3" exitCode=0 Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.602484 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" event={"ID":"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8","Type":"ContainerStarted","Data":"c084438205302fa5dad35445d5512dc7123144a8c0d2df3597816c718aead610"} Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.604041 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5bk7q"] Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.623033 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"703d7352-0a2c-419a-ad78-89c510f60a22","Type":"ContainerStarted","Data":"d14b0886a9385f91c0392db1ea76328e2bf1a021bafc3d39eeb3a6d936868e51"} Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.626069 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec9ab510-20f1-4265-8735-596ca8e18ae9","Type":"ContainerStarted","Data":"004ca29d429b340b046256af29709e0b9149a7402a5d8d345c782c3e5dbec429"} Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.635357 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f98e511-1796-4994-9530-eadf8b7d54e4","Type":"ContainerStarted","Data":"10fb49ea15128641f047cdd33d7b9bfcb342cdb78c10c99d26dc8dc9e04e6747"} Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.639302 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eee9938c-75b9-4737-9210-edb57a6dc1c2","Type":"ContainerStarted","Data":"2db8fa764b9ba5cb32df1d97564ffc097a47ac995df92262266ef282cabdbc38"} Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.641317 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7dt8" event={"ID":"e6b23a7e-3095-43b9-846f-48d7a5b9b628","Type":"ContainerStarted","Data":"62b2c3be1a42f5cf20f78093fcb1e391742316e277d244bcbf3be0ac71523056"} Nov 22 08:43:35 crc kubenswrapper[4856]: I1122 08:43:35.679441 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-k7dt8" podStartSLOduration=2.679415819 podStartE2EDuration="2.679415819s" podCreationTimestamp="2025-11-22 08:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:35.660872919 +0000 UTC m=+6058.074266177" watchObservedRunningTime="2025-11-22 08:43:35.679415819 +0000 UTC m=+6058.092809077" Nov 22 08:43:36 crc kubenswrapper[4856]: I1122 08:43:36.736256 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:36 crc kubenswrapper[4856]: I1122 08:43:36.742176 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" event={"ID":"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8","Type":"ContainerStarted","Data":"ef1e41f2c9277a64d1a6de8db6459f671cdfcf85c5eb15d70795318e3d82fa0d"} Nov 22 08:43:36 crc kubenswrapper[4856]: I1122 08:43:36.748567 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" event={"ID":"0e1a60af-c38b-436e-99aa-e3140fb55829","Type":"ContainerStarted","Data":"f97c7bafcf30231b92f955dcf99fccbff8f3409dad368d1c1dad01eb82dbf7b5"} Nov 22 08:43:36 crc kubenswrapper[4856]: I1122 08:43:36.748602 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" event={"ID":"0e1a60af-c38b-436e-99aa-e3140fb55829","Type":"ContainerStarted","Data":"8a57e196953ead95cee11355767a511e99e6af2bde71e1fed510480932546e4d"} Nov 22 08:43:36 crc kubenswrapper[4856]: I1122 08:43:36.763992 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" podStartSLOduration=3.763972612 podStartE2EDuration="3.763972612s" podCreationTimestamp="2025-11-22 08:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:36.759283926 +0000 UTC m=+6059.172677184" watchObservedRunningTime="2025-11-22 08:43:36.763972612 +0000 UTC m=+6059.177365870" Nov 22 08:43:36 crc kubenswrapper[4856]: I1122 08:43:36.786689 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" podStartSLOduration=2.786664224 podStartE2EDuration="2.786664224s" podCreationTimestamp="2025-11-22 08:43:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:36.778757171 +0000 UTC m=+6059.192150429" watchObservedRunningTime="2025-11-22 08:43:36.786664224 +0000 UTC m=+6059.200057482" Nov 22 08:43:38 crc kubenswrapper[4856]: I1122 08:43:38.025309 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:38 crc kubenswrapper[4856]: I1122 08:43:38.045891 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:43:39 crc kubenswrapper[4856]: I1122 08:43:39.786730 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eee9938c-75b9-4737-9210-edb57a6dc1c2","Type":"ContainerStarted","Data":"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a"} Nov 22 08:43:39 crc kubenswrapper[4856]: I1122 08:43:39.795339 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"703d7352-0a2c-419a-ad78-89c510f60a22","Type":"ContainerStarted","Data":"7a93ad5b8f171ce19f218b3b342691891e521fb536632eb9c7c9b0c49dadafd0"} Nov 22 08:43:39 crc kubenswrapper[4856]: I1122 08:43:39.797135 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec9ab510-20f1-4265-8735-596ca8e18ae9","Type":"ContainerStarted","Data":"8d241a4426d6c05f2e50eb47c2cc076b55025b42ddf9ce9295d3ae9f2c87dde5"} Nov 22 08:43:39 crc kubenswrapper[4856]: I1122 08:43:39.797327 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ec9ab510-20f1-4265-8735-596ca8e18ae9" containerName="nova-cell1-novncproxy-novncproxy" 
containerID="cri-o://8d241a4426d6c05f2e50eb47c2cc076b55025b42ddf9ce9295d3ae9f2c87dde5" gracePeriod=30 Nov 22 08:43:39 crc kubenswrapper[4856]: I1122 08:43:39.800293 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f98e511-1796-4994-9530-eadf8b7d54e4","Type":"ContainerStarted","Data":"b3a92759f5d317db1ccdd0e3f66622176f3484d4d0c7ef22779cdf5d04e98264"} Nov 22 08:43:39 crc kubenswrapper[4856]: I1122 08:43:39.820965 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.480140438 podStartE2EDuration="6.820943089s" podCreationTimestamp="2025-11-22 08:43:33 +0000 UTC" firstStartedPulling="2025-11-22 08:43:35.004688343 +0000 UTC m=+6057.418081601" lastFinishedPulling="2025-11-22 08:43:39.345491004 +0000 UTC m=+6061.758884252" observedRunningTime="2025-11-22 08:43:39.81424871 +0000 UTC m=+6062.227641968" watchObservedRunningTime="2025-11-22 08:43:39.820943089 +0000 UTC m=+6062.234336347" Nov 22 08:43:39 crc kubenswrapper[4856]: I1122 08:43:39.832680 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.228008702 podStartE2EDuration="6.832662686s" podCreationTimestamp="2025-11-22 08:43:33 +0000 UTC" firstStartedPulling="2025-11-22 08:43:34.736240297 +0000 UTC m=+6057.149633555" lastFinishedPulling="2025-11-22 08:43:39.340894271 +0000 UTC m=+6061.754287539" observedRunningTime="2025-11-22 08:43:39.831806602 +0000 UTC m=+6062.245199850" watchObservedRunningTime="2025-11-22 08:43:39.832662686 +0000 UTC m=+6062.246055944" Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.815625 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"703d7352-0a2c-419a-ad78-89c510f60a22","Type":"ContainerStarted","Data":"3d94b2e93e319a110512c4b292b238bbea11e95c312c8368ba51264713bcc977"} Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.823470 4856 generic.go:334] "Generic (PLEG): container finished" podID="0e1a60af-c38b-436e-99aa-e3140fb55829" containerID="f97c7bafcf30231b92f955dcf99fccbff8f3409dad368d1c1dad01eb82dbf7b5" exitCode=0 Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.823603 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" event={"ID":"0e1a60af-c38b-436e-99aa-e3140fb55829","Type":"ContainerDied","Data":"f97c7bafcf30231b92f955dcf99fccbff8f3409dad368d1c1dad01eb82dbf7b5"} Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.831885 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eee9938c-75b9-4737-9210-edb57a6dc1c2","Type":"ContainerStarted","Data":"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987"} Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.831960 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-log" containerID="cri-o://695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a" gracePeriod=30 Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.831991 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-metadata" containerID="cri-o://d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987" gracePeriod=30 Nov 22 08:43:40 crc 
kubenswrapper[4856]: I1122 08:43:40.837194 4856 generic.go:334] "Generic (PLEG): container finished" podID="e6b23a7e-3095-43b9-846f-48d7a5b9b628" containerID="62b2c3be1a42f5cf20f78093fcb1e391742316e277d244bcbf3be0ac71523056" exitCode=0 Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.838215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7dt8" event={"ID":"e6b23a7e-3095-43b9-846f-48d7a5b9b628","Type":"ContainerDied","Data":"62b2c3be1a42f5cf20f78093fcb1e391742316e277d244bcbf3be0ac71523056"} Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.843632 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.127314491 podStartE2EDuration="7.843612715s" podCreationTimestamp="2025-11-22 08:43:33 +0000 UTC" firstStartedPulling="2025-11-22 08:43:34.629318354 +0000 UTC m=+6057.042711612" lastFinishedPulling="2025-11-22 08:43:39.345616588 +0000 UTC m=+6061.759009836" observedRunningTime="2025-11-22 08:43:40.842366181 +0000 UTC m=+6063.255759459" watchObservedRunningTime="2025-11-22 08:43:40.843612715 +0000 UTC m=+6063.257005983" Nov 22 08:43:40 crc kubenswrapper[4856]: I1122 08:43:40.862421 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.161424761 podStartE2EDuration="7.862398781s" podCreationTimestamp="2025-11-22 08:43:33 +0000 UTC" firstStartedPulling="2025-11-22 08:43:34.641125523 +0000 UTC m=+6057.054518781" lastFinishedPulling="2025-11-22 08:43:39.342099543 +0000 UTC m=+6061.755492801" observedRunningTime="2025-11-22 08:43:40.858968019 +0000 UTC m=+6063.272361277" watchObservedRunningTime="2025-11-22 08:43:40.862398781 +0000 UTC m=+6063.275792039" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.423137 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.559361 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c99p2\" (UniqueName: \"kubernetes.io/projected/eee9938c-75b9-4737-9210-edb57a6dc1c2-kube-api-access-c99p2\") pod \"eee9938c-75b9-4737-9210-edb57a6dc1c2\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.559421 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-combined-ca-bundle\") pod \"eee9938c-75b9-4737-9210-edb57a6dc1c2\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.559590 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee9938c-75b9-4737-9210-edb57a6dc1c2-logs\") pod \"eee9938c-75b9-4737-9210-edb57a6dc1c2\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.559625 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-config-data\") pod \"eee9938c-75b9-4737-9210-edb57a6dc1c2\" (UID: \"eee9938c-75b9-4737-9210-edb57a6dc1c2\") " Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.560295 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eee9938c-75b9-4737-9210-edb57a6dc1c2-logs" (OuterVolumeSpecName: "logs") pod "eee9938c-75b9-4737-9210-edb57a6dc1c2" (UID: "eee9938c-75b9-4737-9210-edb57a6dc1c2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.565496 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eee9938c-75b9-4737-9210-edb57a6dc1c2-kube-api-access-c99p2" (OuterVolumeSpecName: "kube-api-access-c99p2") pod "eee9938c-75b9-4737-9210-edb57a6dc1c2" (UID: "eee9938c-75b9-4737-9210-edb57a6dc1c2"). InnerVolumeSpecName "kube-api-access-c99p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.596162 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eee9938c-75b9-4737-9210-edb57a6dc1c2" (UID: "eee9938c-75b9-4737-9210-edb57a6dc1c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.609641 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-config-data" (OuterVolumeSpecName: "config-data") pod "eee9938c-75b9-4737-9210-edb57a6dc1c2" (UID: "eee9938c-75b9-4737-9210-edb57a6dc1c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.661351 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c99p2\" (UniqueName: \"kubernetes.io/projected/eee9938c-75b9-4737-9210-edb57a6dc1c2-kube-api-access-c99p2\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.661384 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.661394 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eee9938c-75b9-4737-9210-edb57a6dc1c2-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.661403 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eee9938c-75b9-4737-9210-edb57a6dc1c2-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.847894 4856 generic.go:334] "Generic (PLEG): container finished" podID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerID="d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987" exitCode=0 Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.847930 4856 generic.go:334] "Generic (PLEG): container finished" podID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerID="695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a" exitCode=143 Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.847978 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.848047 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eee9938c-75b9-4737-9210-edb57a6dc1c2","Type":"ContainerDied","Data":"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987"} Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.848154 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eee9938c-75b9-4737-9210-edb57a6dc1c2","Type":"ContainerDied","Data":"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a"} Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.848171 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"eee9938c-75b9-4737-9210-edb57a6dc1c2","Type":"ContainerDied","Data":"2db8fa764b9ba5cb32df1d97564ffc097a47ac995df92262266ef282cabdbc38"} Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.848189 4856 scope.go:117] "RemoveContainer" containerID="d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.889613 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.893251 4856 scope.go:117] "RemoveContainer" containerID="695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.903448 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.925582 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:41 crc kubenswrapper[4856]: 
E1122 08:43:41.926117 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-log" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.926140 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-log" Nov 22 08:43:41 crc kubenswrapper[4856]: E1122 08:43:41.926176 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-metadata" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.926187 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-metadata" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.926415 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-log" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.926447 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" containerName="nova-metadata-metadata" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.927761 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.932658 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.933006 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.935452 4856 scope.go:117] "RemoveContainer" containerID="d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987" Nov 22 08:43:41 crc kubenswrapper[4856]: E1122 08:43:41.936096 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987\": container with ID starting with d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987 not found: ID does not exist" containerID="d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.936123 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987"} err="failed to get container status \"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987\": rpc error: code = NotFound desc = could not find container \"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987\": container with ID starting with d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987 not found: ID does not exist" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.936144 4856 scope.go:117] "RemoveContainer" containerID="695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a" Nov 22 08:43:41 crc kubenswrapper[4856]: E1122 08:43:41.936380 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a\": container with ID starting with 695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a not found: ID does not exist" 
containerID="695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.936400 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a"} err="failed to get container status \"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a\": rpc error: code = NotFound desc = could not find container \"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a\": container with ID starting with 695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a not found: ID does not exist" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.936413 4856 scope.go:117] "RemoveContainer" containerID="d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.936639 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987"} err="failed to get container status \"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987\": rpc error: code = NotFound desc = could not find container \"d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987\": container with ID starting with d11ed3a7e5732f30c97c0bf95cf4528f266caaebf5e6309be658794bdfcb5987 not found: ID does not exist" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.936658 4856 scope.go:117] "RemoveContainer" containerID="695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.936861 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a"} err="failed to get container status \"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a\": rpc error: code = NotFound desc = could not find container \"695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a\": container with ID starting with 695c9d9f80c029744885f1cc4f5b1215bebf7494d88123d731cd57ab7f7be86a not found: ID does not exist" Nov 22 08:43:41 crc kubenswrapper[4856]: I1122 08:43:41.950165 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.070404 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7v8q\" (UniqueName: \"kubernetes.io/projected/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-kube-api-access-r7v8q\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.070753 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-logs\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.070789 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc 
kubenswrapper[4856]: I1122 08:43:42.070853 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-config-data\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.070893 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.172989 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7v8q\" (UniqueName: \"kubernetes.io/projected/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-kube-api-access-r7v8q\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.173096 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-logs\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.173138 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.173188 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-config-data\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.173238 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.173534 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-logs\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.177796 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.206851 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7v8q\" (UniqueName: \"kubernetes.io/projected/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-kube-api-access-r7v8q\") pod \"nova-metadata-0\" 
(UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.211573 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.211827 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-config-data\") pod \"nova-metadata-0\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.246353 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.280312 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.291349 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.376878 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-config-data\") pod \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.377005 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nls9p\" (UniqueName: \"kubernetes.io/projected/e6b23a7e-3095-43b9-846f-48d7a5b9b628-kube-api-access-nls9p\") pod \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.377063 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-scripts\") pod \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.377126 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmvjd\" (UniqueName: \"kubernetes.io/projected/0e1a60af-c38b-436e-99aa-e3140fb55829-kube-api-access-cmvjd\") pod \"0e1a60af-c38b-436e-99aa-e3140fb55829\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.377166 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-combined-ca-bundle\") pod \"0e1a60af-c38b-436e-99aa-e3140fb55829\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.377190 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-combined-ca-bundle\") pod \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\" (UID: \"e6b23a7e-3095-43b9-846f-48d7a5b9b628\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.377244 4856 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-config-data\") pod \"0e1a60af-c38b-436e-99aa-e3140fb55829\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.377349 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-scripts\") pod \"0e1a60af-c38b-436e-99aa-e3140fb55829\" (UID: \"0e1a60af-c38b-436e-99aa-e3140fb55829\") " Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.383214 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-scripts" (OuterVolumeSpecName: "scripts") pod "e6b23a7e-3095-43b9-846f-48d7a5b9b628" (UID: "e6b23a7e-3095-43b9-846f-48d7a5b9b628"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.384091 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-scripts" (OuterVolumeSpecName: "scripts") pod "0e1a60af-c38b-436e-99aa-e3140fb55829" (UID: "0e1a60af-c38b-436e-99aa-e3140fb55829"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.384216 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b23a7e-3095-43b9-846f-48d7a5b9b628-kube-api-access-nls9p" (OuterVolumeSpecName: "kube-api-access-nls9p") pod "e6b23a7e-3095-43b9-846f-48d7a5b9b628" (UID: "e6b23a7e-3095-43b9-846f-48d7a5b9b628"). InnerVolumeSpecName "kube-api-access-nls9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.384972 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1a60af-c38b-436e-99aa-e3140fb55829-kube-api-access-cmvjd" (OuterVolumeSpecName: "kube-api-access-cmvjd") pod "0e1a60af-c38b-436e-99aa-e3140fb55829" (UID: "0e1a60af-c38b-436e-99aa-e3140fb55829"). InnerVolumeSpecName "kube-api-access-cmvjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.425882 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6b23a7e-3095-43b9-846f-48d7a5b9b628" (UID: "e6b23a7e-3095-43b9-846f-48d7a5b9b628"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.431467 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-config-data" (OuterVolumeSpecName: "config-data") pod "0e1a60af-c38b-436e-99aa-e3140fb55829" (UID: "0e1a60af-c38b-436e-99aa-e3140fb55829"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.439228 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e1a60af-c38b-436e-99aa-e3140fb55829" (UID: "0e1a60af-c38b-436e-99aa-e3140fb55829"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.442936 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-config-data" (OuterVolumeSpecName: "config-data") pod "e6b23a7e-3095-43b9-846f-48d7a5b9b628" (UID: "e6b23a7e-3095-43b9-846f-48d7a5b9b628"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479346 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479372 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479383 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479393 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nls9p\" (UniqueName: \"kubernetes.io/projected/e6b23a7e-3095-43b9-846f-48d7a5b9b628-kube-api-access-nls9p\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479405 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479414 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmvjd\" (UniqueName: \"kubernetes.io/projected/0e1a60af-c38b-436e-99aa-e3140fb55829-kube-api-access-cmvjd\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479423 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1a60af-c38b-436e-99aa-e3140fb55829-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.479434 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6b23a7e-3095-43b9-846f-48d7a5b9b628-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:42 crc kubenswrapper[4856]: W1122 08:43:42.687891 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a99f9d2_9f2c_47e9_a339_e2e8bf40bcdc.slice/crio-9a8c484558dcab4229ae465445c646a1f612c9a4b1754592b646f74f60d226a2 WatchSource:0}: Error finding container 9a8c484558dcab4229ae465445c646a1f612c9a4b1754592b646f74f60d226a2: Status 404 returned error can't find the container with id 
9a8c484558dcab4229ae465445c646a1f612c9a4b1754592b646f74f60d226a2 Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.688164 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.719032 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:43:42 crc kubenswrapper[4856]: E1122 08:43:42.730299 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.747320 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee9938c-75b9-4737-9210-edb57a6dc1c2" path="/var/lib/kubelet/pods/eee9938c-75b9-4737-9210-edb57a6dc1c2/volumes" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.861031 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-k7dt8" event={"ID":"e6b23a7e-3095-43b9-846f-48d7a5b9b628","Type":"ContainerDied","Data":"2d990f77a8b94bd92f10d7aa0e946fddebcf7407b067437b8990935cd89dcdcf"} Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.861089 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d990f77a8b94bd92f10d7aa0e946fddebcf7407b067437b8990935cd89dcdcf" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.861110 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-k7dt8" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.862545 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc","Type":"ContainerStarted","Data":"9a8c484558dcab4229ae465445c646a1f612c9a4b1754592b646f74f60d226a2"} Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.866136 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" event={"ID":"0e1a60af-c38b-436e-99aa-e3140fb55829","Type":"ContainerDied","Data":"8a57e196953ead95cee11355767a511e99e6af2bde71e1fed510480932546e4d"} Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.866178 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a57e196953ead95cee11355767a511e99e6af2bde71e1fed510480932546e4d" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.866229 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5bk7q" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.961686 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 08:43:42 crc kubenswrapper[4856]: E1122 08:43:42.962414 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b23a7e-3095-43b9-846f-48d7a5b9b628" containerName="nova-manage" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.962436 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b23a7e-3095-43b9-846f-48d7a5b9b628" containerName="nova-manage" Nov 22 08:43:42 crc kubenswrapper[4856]: E1122 08:43:42.962450 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1a60af-c38b-436e-99aa-e3140fb55829" containerName="nova-cell1-conductor-db-sync" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.962457 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1a60af-c38b-436e-99aa-e3140fb55829" containerName="nova-cell1-conductor-db-sync" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.962658 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b23a7e-3095-43b9-846f-48d7a5b9b628" containerName="nova-manage" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.962683 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1a60af-c38b-436e-99aa-e3140fb55829" containerName="nova-cell1-conductor-db-sync" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.963321 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.965818 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 08:43:42 crc kubenswrapper[4856]: I1122 08:43:42.972927 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.090091 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkw7v\" (UniqueName: \"kubernetes.io/projected/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-kube-api-access-pkw7v\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.090171 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.090247 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.127401 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.127758 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-log" 
containerID="cri-o://7a93ad5b8f171ce19f218b3b342691891e521fb536632eb9c7c9b0c49dadafd0" gracePeriod=30 Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.128277 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-api" containerID="cri-o://3d94b2e93e319a110512c4b292b238bbea11e95c312c8368ba51264713bcc977" gracePeriod=30 Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.141444 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.142145 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0f98e511-1796-4994-9530-eadf8b7d54e4" containerName="nova-scheduler-scheduler" containerID="cri-o://b3a92759f5d317db1ccdd0e3f66622176f3484d4d0c7ef22779cdf5d04e98264" gracePeriod=30 Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.150411 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.192446 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkw7v\" (UniqueName: \"kubernetes.io/projected/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-kube-api-access-pkw7v\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.192574 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.192637 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.196738 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.196756 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.212504 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkw7v\" (UniqueName: \"kubernetes.io/projected/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-kube-api-access-pkw7v\") pod \"nova-cell1-conductor-0\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.288382 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.738463 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.880697 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc","Type":"ContainerStarted","Data":"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f"} Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.881000 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc","Type":"ContainerStarted","Data":"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f"} Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.881408 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-metadata" containerID="cri-o://2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f" gracePeriod=30 Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.883612 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-log" containerID="cri-o://8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f" gracePeriod=30 Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.889459 4856 generic.go:334] "Generic (PLEG): container finished" podID="703d7352-0a2c-419a-ad78-89c510f60a22" containerID="3d94b2e93e319a110512c4b292b238bbea11e95c312c8368ba51264713bcc977" exitCode=0 Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.889490 4856 generic.go:334] "Generic (PLEG): container finished" podID="703d7352-0a2c-419a-ad78-89c510f60a22" containerID="7a93ad5b8f171ce19f218b3b342691891e521fb536632eb9c7c9b0c49dadafd0" exitCode=143 Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.889549 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"703d7352-0a2c-419a-ad78-89c510f60a22","Type":"ContainerDied","Data":"3d94b2e93e319a110512c4b292b238bbea11e95c312c8368ba51264713bcc977"} Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.889577 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"703d7352-0a2c-419a-ad78-89c510f60a22","Type":"ContainerDied","Data":"7a93ad5b8f171ce19f218b3b342691891e521fb536632eb9c7c9b0c49dadafd0"} Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.889587 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"703d7352-0a2c-419a-ad78-89c510f60a22","Type":"ContainerDied","Data":"d14b0886a9385f91c0392db1ea76328e2bf1a021bafc3d39eeb3a6d936868e51"} Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.889596 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14b0886a9385f91c0392db1ea76328e2bf1a021bafc3d39eeb3a6d936868e51" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.890658 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6","Type":"ContainerStarted","Data":"f5fef5652751a84671badd752388cc36e6023a1d09f0c33aa3c4cfa82eaa5c67"} Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.907066 4856 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.907024876 podStartE2EDuration="2.907024876s" podCreationTimestamp="2025-11-22 08:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:43.896468071 +0000 UTC m=+6066.309861329" watchObservedRunningTime="2025-11-22 08:43:43.907024876 +0000 UTC m=+6066.320418134" Nov 22 08:43:43 crc kubenswrapper[4856]: I1122 08:43:43.958754 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.058927 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.114651 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/703d7352-0a2c-419a-ad78-89c510f60a22-logs\") pod \"703d7352-0a2c-419a-ad78-89c510f60a22\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.115012 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/703d7352-0a2c-419a-ad78-89c510f60a22-logs" (OuterVolumeSpecName: "logs") pod "703d7352-0a2c-419a-ad78-89c510f60a22" (UID: "703d7352-0a2c-419a-ad78-89c510f60a22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.115027 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-combined-ca-bundle\") pod \"703d7352-0a2c-419a-ad78-89c510f60a22\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.115204 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b8k4\" (UniqueName: \"kubernetes.io/projected/703d7352-0a2c-419a-ad78-89c510f60a22-kube-api-access-2b8k4\") pod \"703d7352-0a2c-419a-ad78-89c510f60a22\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.115298 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-config-data\") pod \"703d7352-0a2c-419a-ad78-89c510f60a22\" (UID: \"703d7352-0a2c-419a-ad78-89c510f60a22\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.116019 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/703d7352-0a2c-419a-ad78-89c510f60a22-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.119320 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/703d7352-0a2c-419a-ad78-89c510f60a22-kube-api-access-2b8k4" (OuterVolumeSpecName: "kube-api-access-2b8k4") pod "703d7352-0a2c-419a-ad78-89c510f60a22" (UID: "703d7352-0a2c-419a-ad78-89c510f60a22"). InnerVolumeSpecName "kube-api-access-2b8k4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.140120 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-config-data" (OuterVolumeSpecName: "config-data") pod "703d7352-0a2c-419a-ad78-89c510f60a22" (UID: "703d7352-0a2c-419a-ad78-89c510f60a22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.141086 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "703d7352-0a2c-419a-ad78-89c510f60a22" (UID: "703d7352-0a2c-419a-ad78-89c510f60a22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.152694 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.218070 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b8k4\" (UniqueName: \"kubernetes.io/projected/703d7352-0a2c-419a-ad78-89c510f60a22-kube-api-access-2b8k4\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.220716 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.220734 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d7352-0a2c-419a-ad78-89c510f60a22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.223491 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb88bc67f-mcjjq"] Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.223783 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" podUID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerName="dnsmasq-dns" containerID="cri-o://a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540" gracePeriod=10 Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.232988 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.488227 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.635231 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-combined-ca-bundle\") pod \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.635627 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7v8q\" (UniqueName: \"kubernetes.io/projected/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-kube-api-access-r7v8q\") pod \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.635781 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-nova-metadata-tls-certs\") pod \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.636065 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-logs\") pod \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.636186 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-config-data\") pod \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\" (UID: \"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.636413 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-logs" (OuterVolumeSpecName: "logs") pod "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" (UID: "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.636994 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.643763 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-kube-api-access-r7v8q" (OuterVolumeSpecName: "kube-api-access-r7v8q") pod "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" (UID: "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc"). InnerVolumeSpecName "kube-api-access-r7v8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.661209 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" (UID: "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.674906 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-config-data" (OuterVolumeSpecName: "config-data") pod "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" (UID: "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.703925 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" (UID: "3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.739472 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.739526 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7v8q\" (UniqueName: \"kubernetes.io/projected/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-kube-api-access-r7v8q\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.739540 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.739552 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.780959 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.840160 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-nb\") pod \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.840230 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dpjd\" (UniqueName: \"kubernetes.io/projected/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-kube-api-access-7dpjd\") pod \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.840276 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-config\") pod \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.840301 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-sb\") pod \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.840477 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-dns-svc\") pod \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\" (UID: \"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8\") " Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.844311 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-kube-api-access-7dpjd" (OuterVolumeSpecName: "kube-api-access-7dpjd") pod "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" (UID: "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8"). InnerVolumeSpecName "kube-api-access-7dpjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.902497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" (UID: "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.905693 4856 generic.go:334] "Generic (PLEG): container finished" podID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerID="2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f" exitCode=0 Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.905733 4856 generic.go:334] "Generic (PLEG): container finished" podID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerID="8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f" exitCode=143 Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.905798 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.905827 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc","Type":"ContainerDied","Data":"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f"} Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.905864 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc","Type":"ContainerDied","Data":"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f"} Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.905884 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc","Type":"ContainerDied","Data":"9a8c484558dcab4229ae465445c646a1f612c9a4b1754592b646f74f60d226a2"} Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.905904 4856 scope.go:117] "RemoveContainer" containerID="2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.909625 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6","Type":"ContainerStarted","Data":"611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748"} Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.910076 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.913253 4856 generic.go:334] "Generic (PLEG): container finished" podID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerID="a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540" exitCode=0 Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.913294 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.913312 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" event={"ID":"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8","Type":"ContainerDied","Data":"a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540"} Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.913616 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb88bc67f-mcjjq" event={"ID":"9ae0d80c-2578-49ae-81c7-f1dadef2e0f8","Type":"ContainerDied","Data":"6f5f445f244a011c986c6541300153d08c7d45eb24db599f9ae7b688c71c5fd3"} Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.913735 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.915077 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" (UID: "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.915305 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" (UID: "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.921180 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-config" (OuterVolumeSpecName: "config") pod "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" (UID: "9ae0d80c-2578-49ae-81c7-f1dadef2e0f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.938917 4856 scope.go:117] "RemoveContainer" containerID="8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.942483 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dpjd\" (UniqueName: \"kubernetes.io/projected/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-kube-api-access-7dpjd\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.942708 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.942725 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.942738 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.942747 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.952814 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.952765043 podStartE2EDuration="2.952765043s" podCreationTimestamp="2025-11-22 08:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:44.923413362 +0000 UTC m=+6067.336806640" watchObservedRunningTime="2025-11-22 08:43:44.952765043 +0000 UTC m=+6067.366158321" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.991300 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.994774 4856 scope.go:117] "RemoveContainer" containerID="2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f" Nov 22 08:43:44 crc kubenswrapper[4856]: E1122 08:43:44.995902 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f\": container with ID starting with 2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f not found: ID does not exist" containerID="2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.995944 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f"} err="failed to get container status \"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f\": rpc error: code = NotFound desc = could not find container \"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f\": container with ID starting with 2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f not found: ID does not exist" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.995977 4856 scope.go:117] "RemoveContainer" containerID="8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f" Nov 22 08:43:44 crc kubenswrapper[4856]: E1122 08:43:44.996198 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f\": container with ID starting with 8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f not found: ID does not exist" containerID="8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.996230 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f"} err="failed to get container status \"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f\": rpc error: code = NotFound desc = could not find container \"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f\": container with ID starting with 8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f not found: ID does not exist" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.996249 4856 scope.go:117] "RemoveContainer" containerID="2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.996539 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f"} err="failed to get container status \"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f\": rpc error: code = NotFound desc = could not find container \"2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f\": container with ID starting with 2242a868a17a965e24b21dcfa32ca6842c980869312641e4e7a2d855aff03e1f not found: ID does not exist" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.996564 4856 scope.go:117] "RemoveContainer" containerID="8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.996776 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f"} err="failed to get container status \"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f\": rpc error: code = NotFound desc = could not find container \"8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f\": container with ID starting with 
8a2d0cfaa24da96d30e76b4a17c9a9424804aff0617330820a5aef2a4351810f not found: ID does not exist" Nov 22 08:43:44 crc kubenswrapper[4856]: I1122 08:43:44.996806 4856 scope.go:117] "RemoveContainer" containerID="a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.005628 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.021451 4856 scope.go:117] "RemoveContainer" containerID="124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.026829 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.059184 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.066499 4856 scope.go:117] "RemoveContainer" containerID="a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540" Nov 22 08:43:45 crc kubenswrapper[4856]: E1122 08:43:45.067963 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540\": container with ID starting with a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540 not found: ID does not exist" containerID="a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.068019 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540"} err="failed to get container status \"a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540\": rpc error: code = NotFound desc = could not find container \"a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540\": container with ID starting with a1b7d8e7ce98d23edc0ad61c7488d8fc35d50c10dbbd8f3223dd7cea97dec540 not found: ID does not exist" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.068048 4856 scope.go:117] "RemoveContainer" containerID="124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65" Nov 22 08:43:45 crc kubenswrapper[4856]: E1122 08:43:45.068370 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65\": container with ID starting with 124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65 not found: ID does not exist" containerID="124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.068423 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65"} err="failed to get container status \"124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65\": rpc error: code = NotFound desc = could not find container \"124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65\": container with ID starting with 124bdd8092d0e70531c06cce75c64eb159b7221751b4225107e9433dd79a9f65 not found: ID does not exist" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.071560 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:45 crc 
kubenswrapper[4856]: E1122 08:43:45.071934 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-log" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.071951 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-log" Nov 22 08:43:45 crc kubenswrapper[4856]: E1122 08:43:45.071963 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-log" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.071969 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-log" Nov 22 08:43:45 crc kubenswrapper[4856]: E1122 08:43:45.071991 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-api" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.071998 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-api" Nov 22 08:43:45 crc kubenswrapper[4856]: E1122 08:43:45.072013 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerName="init" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072019 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerName="init" Nov 22 08:43:45 crc kubenswrapper[4856]: E1122 08:43:45.072027 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerName="dnsmasq-dns" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072033 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerName="dnsmasq-dns" Nov 22 08:43:45 crc kubenswrapper[4856]: E1122 08:43:45.072047 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-metadata" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072054 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-metadata" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072223 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-api" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072244 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" containerName="dnsmasq-dns" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072254 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-log" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072266 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" containerName="nova-api-log" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.072277 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" containerName="nova-metadata-metadata" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.073248 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.075568 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.076603 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.080982 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.083862 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.087878 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.097744 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.111799 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147605 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4n6\" (UniqueName: \"kubernetes.io/projected/42687cfc-ebd9-4f23-a4e1-1443f5539dad-kube-api-access-nt4n6\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147663 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43139350-8f06-4109-91a4-a71c0795a2cd-logs\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147696 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147737 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147821 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42687cfc-ebd9-4f23-a4e1-1443f5539dad-logs\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147879 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-config-data\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147941 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gncb\" (UniqueName: \"kubernetes.io/projected/43139350-8f06-4109-91a4-a71c0795a2cd-kube-api-access-4gncb\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.147970 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.148074 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-config-data\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.249641 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42687cfc-ebd9-4f23-a4e1-1443f5539dad-logs\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.249730 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-config-data\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.249790 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gncb\" (UniqueName: \"kubernetes.io/projected/43139350-8f06-4109-91a4-a71c0795a2cd-kube-api-access-4gncb\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.249825 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.249896 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-config-data\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.249958 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4n6\" (UniqueName: \"kubernetes.io/projected/42687cfc-ebd9-4f23-a4e1-1443f5539dad-kube-api-access-nt4n6\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.249986 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43139350-8f06-4109-91a4-a71c0795a2cd-logs\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc 
kubenswrapper[4856]: I1122 08:43:45.250014 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.250051 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.250527 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42687cfc-ebd9-4f23-a4e1-1443f5539dad-logs\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.251195 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43139350-8f06-4109-91a4-a71c0795a2cd-logs\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.253412 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fb88bc67f-mcjjq"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.254676 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-config-data\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.255683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.255923 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.256055 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-config-data\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.256476 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.265561 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fb88bc67f-mcjjq"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.267911 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gncb\" (UniqueName: \"kubernetes.io/projected/43139350-8f06-4109-91a4-a71c0795a2cd-kube-api-access-4gncb\") pod \"nova-metadata-0\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.270443 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4n6\" (UniqueName: \"kubernetes.io/projected/42687cfc-ebd9-4f23-a4e1-1443f5539dad-kube-api-access-nt4n6\") pod \"nova-api-0\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.398901 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.412300 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:43:45 crc kubenswrapper[4856]: W1122 08:43:45.898176 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42687cfc_ebd9_4f23_a4e1_1443f5539dad.slice/crio-6b7fc932f314ba748a5608c28ff739645c5999d281b1a90753eec87b2d2e4db4 WatchSource:0}: Error finding container 6b7fc932f314ba748a5608c28ff739645c5999d281b1a90753eec87b2d2e4db4: Status 404 returned error can't find the container with id 6b7fc932f314ba748a5608c28ff739645c5999d281b1a90753eec87b2d2e4db4 Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.902908 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.926055 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42687cfc-ebd9-4f23-a4e1-1443f5539dad","Type":"ContainerStarted","Data":"6b7fc932f314ba748a5608c28ff739645c5999d281b1a90753eec87b2d2e4db4"} Nov 22 08:43:45 crc kubenswrapper[4856]: I1122 08:43:45.992136 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:43:46 crc kubenswrapper[4856]: W1122 08:43:46.001869 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43139350_8f06_4109_91a4_a71c0795a2cd.slice/crio-cbe7706be31018fa7fa620102f329e603703753711a4626d08e07a330b57dcf3 WatchSource:0}: Error finding container cbe7706be31018fa7fa620102f329e603703753711a4626d08e07a330b57dcf3: Status 404 returned error can't find the container with id cbe7706be31018fa7fa620102f329e603703753711a4626d08e07a330b57dcf3 Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.720628 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc" path="/var/lib/kubelet/pods/3a99f9d2-9f2c-47e9-a339-e2e8bf40bcdc/volumes" Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.721558 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="703d7352-0a2c-419a-ad78-89c510f60a22" path="/var/lib/kubelet/pods/703d7352-0a2c-419a-ad78-89c510f60a22/volumes" Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.722144 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae0d80c-2578-49ae-81c7-f1dadef2e0f8" path="/var/lib/kubelet/pods/9ae0d80c-2578-49ae-81c7-f1dadef2e0f8/volumes" Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.941343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"43139350-8f06-4109-91a4-a71c0795a2cd","Type":"ContainerStarted","Data":"7eb599cca049e58a1214ab401ad7fbbfe537e484fe5ba55db233ef84c50389c9"} Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.941701 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43139350-8f06-4109-91a4-a71c0795a2cd","Type":"ContainerStarted","Data":"eac5280c8add0b4b9199a29b8732420e42b3fdde1da2ef89de524585e48304ce"} Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.941714 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43139350-8f06-4109-91a4-a71c0795a2cd","Type":"ContainerStarted","Data":"cbe7706be31018fa7fa620102f329e603703753711a4626d08e07a330b57dcf3"} Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.943974 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42687cfc-ebd9-4f23-a4e1-1443f5539dad","Type":"ContainerStarted","Data":"04a7dd6dbb264df69f733b8adc9289a5bfdfeaf5184143a2e3badfd5c201b98b"} Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.943996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42687cfc-ebd9-4f23-a4e1-1443f5539dad","Type":"ContainerStarted","Data":"16a323f53cc673a27c5f84730f7fe97e8335d7112e1d9c9754c1bd10bb1f5b5c"} Nov 22 08:43:46 crc kubenswrapper[4856]: I1122 08:43:46.984965 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.984940957 podStartE2EDuration="2.984940957s" podCreationTimestamp="2025-11-22 08:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:46.977485587 +0000 UTC m=+6069.390878885" watchObservedRunningTime="2025-11-22 08:43:46.984940957 +0000 UTC m=+6069.398334215" Nov 22 08:43:47 crc kubenswrapper[4856]: I1122 08:43:47.006549 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.006477568 podStartE2EDuration="3.006477568s" podCreationTimestamp="2025-11-22 08:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:43:47.00360965 +0000 UTC m=+6069.417002948" watchObservedRunningTime="2025-11-22 08:43:47.006477568 +0000 UTC m=+6069.419870876" Nov 22 08:43:49 crc kubenswrapper[4856]: I1122 08:43:49.057112 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nzqfv"] Nov 22 08:43:49 crc kubenswrapper[4856]: I1122 08:43:49.067999 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a69c-account-create-88z2q"] Nov 22 08:43:49 crc kubenswrapper[4856]: I1122 08:43:49.075633 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nzqfv"] Nov 22 08:43:49 crc kubenswrapper[4856]: I1122 08:43:49.083828 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a69c-account-create-88z2q"] Nov 22 08:43:50 crc kubenswrapper[4856]: I1122 08:43:50.399406 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 08:43:50 crc kubenswrapper[4856]: I1122 08:43:50.399466 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 08:43:50 crc kubenswrapper[4856]: I1122 08:43:50.721037 4856 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26eea8bb-99a6-46d3-8fad-283cad87cd06" path="/var/lib/kubelet/pods/26eea8bb-99a6-46d3-8fad-283cad87cd06/volumes" Nov 22 08:43:50 crc kubenswrapper[4856]: I1122 08:43:50.721729 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb66f8f-c69f-4526-b599-b5aa8214ad02" path="/var/lib/kubelet/pods/4bb66f8f-c69f-4526-b599-b5aa8214ad02/volumes" Nov 22 08:43:53 crc kubenswrapper[4856]: I1122 08:43:53.345630 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 08:43:54 crc kubenswrapper[4856]: I1122 08:43:54.710974 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:43:54 crc kubenswrapper[4856]: E1122 08:43:54.711704 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:43:55 crc kubenswrapper[4856]: I1122 08:43:55.399741 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 08:43:55 crc kubenswrapper[4856]: I1122 08:43:55.400152 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 08:43:55 crc kubenswrapper[4856]: I1122 08:43:55.412667 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 08:43:55 crc kubenswrapper[4856]: I1122 08:43:55.412995 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 08:43:56 crc kubenswrapper[4856]: I1122 08:43:56.419724 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:43:56 crc kubenswrapper[4856]: I1122 08:43:56.502720 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:43:56 crc kubenswrapper[4856]: I1122 08:43:56.502877 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:43:56 crc kubenswrapper[4856]: I1122 08:43:56.503044 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:02 crc kubenswrapper[4856]: I1122 08:44:02.035315 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-db-sync-f4hxt"] Nov 22 08:44:02 crc kubenswrapper[4856]: I1122 08:44:02.047665 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-f4hxt"] Nov 22 08:44:02 crc kubenswrapper[4856]: I1122 08:44:02.720716 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa7e0985-459a-4527-83a1-595e7344c8fe" path="/var/lib/kubelet/pods/aa7e0985-459a-4527-83a1-595e7344c8fe/volumes" Nov 22 08:44:06 crc kubenswrapper[4856]: I1122 08:44:06.409646 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:06 crc kubenswrapper[4856]: I1122 08:44:06.409712 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:06 crc kubenswrapper[4856]: I1122 08:44:06.494771 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:06 crc kubenswrapper[4856]: I1122 08:44:06.494894 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:09 crc kubenswrapper[4856]: I1122 08:44:09.710411 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:44:09 crc kubenswrapper[4856]: E1122 08:44:09.711103 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.180376 4856 generic.go:334] "Generic (PLEG): container finished" podID="ec9ab510-20f1-4265-8735-596ca8e18ae9" containerID="8d241a4426d6c05f2e50eb47c2cc076b55025b42ddf9ce9295d3ae9f2c87dde5" exitCode=137 Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.180571 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec9ab510-20f1-4265-8735-596ca8e18ae9","Type":"ContainerDied","Data":"8d241a4426d6c05f2e50eb47c2cc076b55025b42ddf9ce9295d3ae9f2c87dde5"} Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.355730 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.458852 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-config-data\") pod \"ec9ab510-20f1-4265-8735-596ca8e18ae9\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.459002 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm79t\" (UniqueName: \"kubernetes.io/projected/ec9ab510-20f1-4265-8735-596ca8e18ae9-kube-api-access-sm79t\") pod \"ec9ab510-20f1-4265-8735-596ca8e18ae9\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.459087 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-combined-ca-bundle\") pod \"ec9ab510-20f1-4265-8735-596ca8e18ae9\" (UID: \"ec9ab510-20f1-4265-8735-596ca8e18ae9\") " Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.465345 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9ab510-20f1-4265-8735-596ca8e18ae9-kube-api-access-sm79t" (OuterVolumeSpecName: "kube-api-access-sm79t") pod "ec9ab510-20f1-4265-8735-596ca8e18ae9" (UID: "ec9ab510-20f1-4265-8735-596ca8e18ae9"). InnerVolumeSpecName "kube-api-access-sm79t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.486308 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-config-data" (OuterVolumeSpecName: "config-data") pod "ec9ab510-20f1-4265-8735-596ca8e18ae9" (UID: "ec9ab510-20f1-4265-8735-596ca8e18ae9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.486727 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec9ab510-20f1-4265-8735-596ca8e18ae9" (UID: "ec9ab510-20f1-4265-8735-596ca8e18ae9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.560815 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.560854 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9ab510-20f1-4265-8735-596ca8e18ae9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:11 crc kubenswrapper[4856]: I1122 08:44:11.560867 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm79t\" (UniqueName: \"kubernetes.io/projected/ec9ab510-20f1-4265-8735-596ca8e18ae9-kube-api-access-sm79t\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.191229 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ec9ab510-20f1-4265-8735-596ca8e18ae9","Type":"ContainerDied","Data":"004ca29d429b340b046256af29709e0b9149a7402a5d8d345c782c3e5dbec429"} Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.191285 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.191748 4856 scope.go:117] "RemoveContainer" containerID="8d241a4426d6c05f2e50eb47c2cc076b55025b42ddf9ce9295d3ae9f2c87dde5" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.230551 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.242309 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.250870 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:44:12 crc kubenswrapper[4856]: E1122 08:44:12.251312 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9ab510-20f1-4265-8735-596ca8e18ae9" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.251334 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9ab510-20f1-4265-8735-596ca8e18ae9" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.251631 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9ab510-20f1-4265-8735-596ca8e18ae9" containerName="nova-cell1-novncproxy-novncproxy" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.252371 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.254448 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.254677 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.257070 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.262711 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.377300 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.377446 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.377800 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzfj\" (UniqueName: \"kubernetes.io/projected/6998f0c9-8a9c-4d8c-9549-412b52efd19e-kube-api-access-gfzfj\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.377938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.377986 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.479435 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.479541 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzfj\" (UniqueName: \"kubernetes.io/projected/6998f0c9-8a9c-4d8c-9549-412b52efd19e-kube-api-access-gfzfj\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.479600 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.479624 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.479678 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.722463 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec9ab510-20f1-4265-8735-596ca8e18ae9" path="/var/lib/kubelet/pods/ec9ab510-20f1-4265-8735-596ca8e18ae9/volumes" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.733761 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.733786 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.733757 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.734058 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6998f0c9-8a9c-4d8c-9549-412b52efd19e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.736689 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfzfj\" (UniqueName: \"kubernetes.io/projected/6998f0c9-8a9c-4d8c-9549-412b52efd19e-kube-api-access-gfzfj\") pod \"nova-cell1-novncproxy-0\" (UID: \"6998f0c9-8a9c-4d8c-9549-412b52efd19e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:12 crc kubenswrapper[4856]: I1122 08:44:12.892872 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:13 crc kubenswrapper[4856]: I1122 08:44:13.328822 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 22 08:44:13 crc kubenswrapper[4856]: W1122 08:44:13.335077 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6998f0c9_8a9c_4d8c_9549_412b52efd19e.slice/crio-f7e6645ffa46a888c97624de757c0c544474f5266887c0dba9ee82def5c39d62 WatchSource:0}: Error finding container f7e6645ffa46a888c97624de757c0c544474f5266887c0dba9ee82def5c39d62: Status 404 returned error can't find the container with id f7e6645ffa46a888c97624de757c0c544474f5266887c0dba9ee82def5c39d62 Nov 22 08:44:14 crc kubenswrapper[4856]: I1122 08:44:14.217199 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6998f0c9-8a9c-4d8c-9549-412b52efd19e","Type":"ContainerStarted","Data":"f7e6645ffa46a888c97624de757c0c544474f5266887c0dba9ee82def5c39d62"} Nov 22 08:44:15 crc kubenswrapper[4856]: I1122 08:44:15.412930 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 08:44:15 crc kubenswrapper[4856]: I1122 08:44:15.412982 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 08:44:16 crc kubenswrapper[4856]: I1122 08:44:16.047113 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-vm9wt"] Nov 22 08:44:16 crc kubenswrapper[4856]: I1122 08:44:16.057151 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-vm9wt"] Nov 22 08:44:16 crc kubenswrapper[4856]: I1122 08:44:16.408659 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:16 crc kubenswrapper[4856]: I1122 08:44:16.408688 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:16 crc kubenswrapper[4856]: I1122 08:44:16.495798 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:16 crc kubenswrapper[4856]: I1122 08:44:16.495902 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:16 crc kubenswrapper[4856]: I1122 08:44:16.723159 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c2f0db-0bef-41d1-8b0c-4e7875e69f99" path="/var/lib/kubelet/pods/b2c2f0db-0bef-41d1-8b0c-4e7875e69f99/volumes" Nov 22 08:44:20 crc kubenswrapper[4856]: I1122 08:44:20.986640 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="0f98e511-1796-4994-9530-eadf8b7d54e4" containerID="b3a92759f5d317db1ccdd0e3f66622176f3484d4d0c7ef22779cdf5d04e98264" exitCode=-1 Nov 22 08:44:20 crc kubenswrapper[4856]: I1122 08:44:20.986748 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f98e511-1796-4994-9530-eadf8b7d54e4","Type":"ContainerDied","Data":"b3a92759f5d317db1ccdd0e3f66622176f3484d4d0c7ef22779cdf5d04e98264"} Nov 22 08:44:23 crc kubenswrapper[4856]: I1122 08:44:23.080988 4856 scope.go:117] "RemoveContainer" containerID="6802c3a04d19d09320e10dd4878392cec70a4cde253c8e4cd5bf94b5220e20ac" Nov 22 08:44:23 crc kubenswrapper[4856]: I1122 08:44:23.111304 4856 scope.go:117] "RemoveContainer" containerID="42f28f1cc97cbe6a6363460525ec227dec33a31c68c47411b5f3c09cc50fac93" Nov 22 08:44:23 crc kubenswrapper[4856]: I1122 08:44:23.188097 4856 scope.go:117] "RemoveContainer" containerID="e319472080c95830cec12e31aed60ac4e7dd030b360e7baece50b1f50d7f094e" Nov 22 08:44:23 crc kubenswrapper[4856]: I1122 08:44:23.228966 4856 scope.go:117] "RemoveContainer" containerID="268d901263ed4e1063f1fee215fd020aaccb13e4d1a0f68eb3ba148400263601" Nov 22 08:44:24 crc kubenswrapper[4856]: I1122 08:44:24.710176 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:44:24 crc kubenswrapper[4856]: E1122 08:44:24.712009 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:44:26 crc kubenswrapper[4856]: I1122 08:44:26.408636 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:26 crc kubenswrapper[4856]: I1122 08:44:26.408660 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:26 crc kubenswrapper[4856]: I1122 08:44:26.455057 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:26 crc kubenswrapper[4856]: I1122 08:44:26.455155 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:27 crc kubenswrapper[4856]: I1122 08:44:27.858294 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:44:27 crc kubenswrapper[4856]: I1122 08:44:27.984948 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-config-data\") pod \"0f98e511-1796-4994-9530-eadf8b7d54e4\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " Nov 22 08:44:27 crc kubenswrapper[4856]: I1122 08:44:27.985027 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l589h\" (UniqueName: \"kubernetes.io/projected/0f98e511-1796-4994-9530-eadf8b7d54e4-kube-api-access-l589h\") pod \"0f98e511-1796-4994-9530-eadf8b7d54e4\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " Nov 22 08:44:27 crc kubenswrapper[4856]: I1122 08:44:27.985097 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-combined-ca-bundle\") pod \"0f98e511-1796-4994-9530-eadf8b7d54e4\" (UID: \"0f98e511-1796-4994-9530-eadf8b7d54e4\") " Nov 22 08:44:27 crc kubenswrapper[4856]: I1122 08:44:27.990874 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f98e511-1796-4994-9530-eadf8b7d54e4-kube-api-access-l589h" (OuterVolumeSpecName: "kube-api-access-l589h") pod "0f98e511-1796-4994-9530-eadf8b7d54e4" (UID: "0f98e511-1796-4994-9530-eadf8b7d54e4"). InnerVolumeSpecName "kube-api-access-l589h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.015876 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-config-data" (OuterVolumeSpecName: "config-data") pod "0f98e511-1796-4994-9530-eadf8b7d54e4" (UID: "0f98e511-1796-4994-9530-eadf8b7d54e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.023318 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f98e511-1796-4994-9530-eadf8b7d54e4" (UID: "0f98e511-1796-4994-9530-eadf8b7d54e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.065741 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6998f0c9-8a9c-4d8c-9549-412b52efd19e","Type":"ContainerStarted","Data":"00c578f7d2d901bea8ad3786c7aa92e08af1a2b98a4324bc2b04a63dd2d5a611"} Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.072578 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f98e511-1796-4994-9530-eadf8b7d54e4","Type":"ContainerDied","Data":"10fb49ea15128641f047cdd33d7b9bfcb342cdb78c10c99d26dc8dc9e04e6747"} Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.072640 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.072847 4856 scope.go:117] "RemoveContainer" containerID="b3a92759f5d317db1ccdd0e3f66622176f3484d4d0c7ef22779cdf5d04e98264" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.088933 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.088972 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l589h\" (UniqueName: \"kubernetes.io/projected/0f98e511-1796-4994-9530-eadf8b7d54e4-kube-api-access-l589h\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.088984 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f98e511-1796-4994-9530-eadf8b7d54e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.099119 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=16.099093566 podStartE2EDuration="16.099093566s" podCreationTimestamp="2025-11-22 08:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:44:28.091980085 +0000 UTC m=+6110.505373343" watchObservedRunningTime="2025-11-22 08:44:28.099093566 +0000 UTC m=+6110.512486824" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.128213 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.138825 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.180170 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:44:28 crc kubenswrapper[4856]: E1122 08:44:28.180839 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f98e511-1796-4994-9530-eadf8b7d54e4" containerName="nova-scheduler-scheduler" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.180863 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f98e511-1796-4994-9530-eadf8b7d54e4" containerName="nova-scheduler-scheduler" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.181167 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f98e511-1796-4994-9530-eadf8b7d54e4" containerName="nova-scheduler-scheduler" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.182164 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.188089 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.189180 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.192475 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-config-data\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.192781 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.192829 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/dc7a5c86-a6a8-4da4-970c-a34886016faa-kube-api-access-ph55w\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.293728 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-config-data\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.293805 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.293837 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/dc7a5c86-a6a8-4da4-970c-a34886016faa-kube-api-access-ph55w\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.299394 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-config-data\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.308826 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.311896 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph55w\" (UniqueName: 
\"kubernetes.io/projected/dc7a5c86-a6a8-4da4-970c-a34886016faa-kube-api-access-ph55w\") pod \"nova-scheduler-0\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.502106 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:44:28 crc kubenswrapper[4856]: I1122 08:44:28.728295 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f98e511-1796-4994-9530-eadf8b7d54e4" path="/var/lib/kubelet/pods/0f98e511-1796-4994-9530-eadf8b7d54e4/volumes" Nov 22 08:44:29 crc kubenswrapper[4856]: I1122 08:44:28.996969 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:44:29 crc kubenswrapper[4856]: W1122 08:44:28.998274 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7a5c86_a6a8_4da4_970c_a34886016faa.slice/crio-7dd96f27d0fb07f9026e9f50f6302e9c5f19eca88276869c41f3af9160dc6e22 WatchSource:0}: Error finding container 7dd96f27d0fb07f9026e9f50f6302e9c5f19eca88276869c41f3af9160dc6e22: Status 404 returned error can't find the container with id 7dd96f27d0fb07f9026e9f50f6302e9c5f19eca88276869c41f3af9160dc6e22 Nov 22 08:44:29 crc kubenswrapper[4856]: I1122 08:44:29.084733 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc7a5c86-a6a8-4da4-970c-a34886016faa","Type":"ContainerStarted","Data":"7dd96f27d0fb07f9026e9f50f6302e9c5f19eca88276869c41f3af9160dc6e22"} Nov 22 08:44:30 crc kubenswrapper[4856]: I1122 08:44:30.093131 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc7a5c86-a6a8-4da4-970c-a34886016faa","Type":"ContainerStarted","Data":"1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc"} Nov 22 08:44:31 crc kubenswrapper[4856]: I1122 08:44:31.128877 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.12885791 podStartE2EDuration="3.12885791s" podCreationTimestamp="2025-11-22 08:44:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:44:31.120723041 +0000 UTC m=+6113.534116339" watchObservedRunningTime="2025-11-22 08:44:31.12885791 +0000 UTC m=+6113.542251168" Nov 22 08:44:32 crc kubenswrapper[4856]: I1122 08:44:32.893429 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:32 crc kubenswrapper[4856]: I1122 08:44:32.893962 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:32 crc kubenswrapper[4856]: I1122 08:44:32.912546 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.154872 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.333086 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bb2b6"] Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.334456 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.336646 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.336749 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.345815 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bb2b6"] Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.502926 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.508537 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cnhk\" (UniqueName: \"kubernetes.io/projected/482880bb-c065-4bed-be16-bad626eac7ed-kube-api-access-6cnhk\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.508967 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-config-data\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.509047 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.509332 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-scripts\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.611747 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-scripts\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.611811 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cnhk\" (UniqueName: \"kubernetes.io/projected/482880bb-c065-4bed-be16-bad626eac7ed-kube-api-access-6cnhk\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.612011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-config-data\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 
crc kubenswrapper[4856]: I1122 08:44:33.612036 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.620015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-scripts\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.620270 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-config-data\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.629236 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cnhk\" (UniqueName: \"kubernetes.io/projected/482880bb-c065-4bed-be16-bad626eac7ed-kube-api-access-6cnhk\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.629503 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bb2b6\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:33 crc kubenswrapper[4856]: I1122 08:44:33.660886 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:34 crc kubenswrapper[4856]: I1122 08:44:34.112363 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bb2b6"] Nov 22 08:44:34 crc kubenswrapper[4856]: W1122 08:44:34.117769 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod482880bb_c065_4bed_be16_bad626eac7ed.slice/crio-32269c431ffced18f6073612d7ddb75c493e9fb27eb6a66e3e520c8afcdc6129 WatchSource:0}: Error finding container 32269c431ffced18f6073612d7ddb75c493e9fb27eb6a66e3e520c8afcdc6129: Status 404 returned error can't find the container with id 32269c431ffced18f6073612d7ddb75c493e9fb27eb6a66e3e520c8afcdc6129 Nov 22 08:44:34 crc kubenswrapper[4856]: I1122 08:44:34.139994 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bb2b6" event={"ID":"482880bb-c065-4bed-be16-bad626eac7ed","Type":"ContainerStarted","Data":"32269c431ffced18f6073612d7ddb75c493e9fb27eb6a66e3e520c8afcdc6129"} Nov 22 08:44:35 crc kubenswrapper[4856]: I1122 08:44:35.151380 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bb2b6" event={"ID":"482880bb-c065-4bed-be16-bad626eac7ed","Type":"ContainerStarted","Data":"f2d7f4d17daf3a8a1de39e5b6d2335f98e57082c3c0ace101a71b483b1ecba36"} Nov 22 08:44:35 crc kubenswrapper[4856]: I1122 08:44:35.180556 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bb2b6" podStartSLOduration=2.180538169 podStartE2EDuration="2.180538169s" podCreationTimestamp="2025-11-22 08:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:44:35.173816038 +0000 UTC m=+6117.587209296" watchObservedRunningTime="2025-11-22 08:44:35.180538169 +0000 UTC m=+6117.593931427" Nov 22 08:44:36 crc kubenswrapper[4856]: I1122 08:44:36.407699 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:36 crc kubenswrapper[4856]: I1122 08:44:36.407771 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.93:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:36 crc kubenswrapper[4856]: I1122 08:44:36.495873 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:36 crc kubenswrapper[4856]: I1122 08:44:36.496097 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.94:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:44:38 crc kubenswrapper[4856]: I1122 08:44:38.503292 4856 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 08:44:38 crc kubenswrapper[4856]: I1122 08:44:38.539446 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 08:44:38 crc kubenswrapper[4856]: I1122 08:44:38.722294 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:44:38 crc kubenswrapper[4856]: E1122 08:44:38.722716 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:44:39 crc kubenswrapper[4856]: I1122 08:44:39.228690 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 08:44:40 crc kubenswrapper[4856]: I1122 08:44:40.213763 4856 generic.go:334] "Generic (PLEG): container finished" podID="482880bb-c065-4bed-be16-bad626eac7ed" containerID="f2d7f4d17daf3a8a1de39e5b6d2335f98e57082c3c0ace101a71b483b1ecba36" exitCode=0 Nov 22 08:44:40 crc kubenswrapper[4856]: I1122 08:44:40.213868 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bb2b6" event={"ID":"482880bb-c065-4bed-be16-bad626eac7ed","Type":"ContainerDied","Data":"f2d7f4d17daf3a8a1de39e5b6d2335f98e57082c3c0ace101a71b483b1ecba36"} Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.562607 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.582065 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-combined-ca-bundle\") pod \"482880bb-c065-4bed-be16-bad626eac7ed\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.582145 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cnhk\" (UniqueName: \"kubernetes.io/projected/482880bb-c065-4bed-be16-bad626eac7ed-kube-api-access-6cnhk\") pod \"482880bb-c065-4bed-be16-bad626eac7ed\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.582259 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-config-data\") pod \"482880bb-c065-4bed-be16-bad626eac7ed\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.582397 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-scripts\") pod \"482880bb-c065-4bed-be16-bad626eac7ed\" (UID: \"482880bb-c065-4bed-be16-bad626eac7ed\") " Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.593817 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-scripts" (OuterVolumeSpecName: "scripts") pod "482880bb-c065-4bed-be16-bad626eac7ed" (UID: 
"482880bb-c065-4bed-be16-bad626eac7ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.593825 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/482880bb-c065-4bed-be16-bad626eac7ed-kube-api-access-6cnhk" (OuterVolumeSpecName: "kube-api-access-6cnhk") pod "482880bb-c065-4bed-be16-bad626eac7ed" (UID: "482880bb-c065-4bed-be16-bad626eac7ed"). InnerVolumeSpecName "kube-api-access-6cnhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.620324 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "482880bb-c065-4bed-be16-bad626eac7ed" (UID: "482880bb-c065-4bed-be16-bad626eac7ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.621661 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-config-data" (OuterVolumeSpecName: "config-data") pod "482880bb-c065-4bed-be16-bad626eac7ed" (UID: "482880bb-c065-4bed-be16-bad626eac7ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.684142 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.684178 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.684193 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cnhk\" (UniqueName: \"kubernetes.io/projected/482880bb-c065-4bed-be16-bad626eac7ed-kube-api-access-6cnhk\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:41 crc kubenswrapper[4856]: I1122 08:44:41.684206 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/482880bb-c065-4bed-be16-bad626eac7ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.232181 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bb2b6" event={"ID":"482880bb-c065-4bed-be16-bad626eac7ed","Type":"ContainerDied","Data":"32269c431ffced18f6073612d7ddb75c493e9fb27eb6a66e3e520c8afcdc6129"} Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.232229 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32269c431ffced18f6073612d7ddb75c493e9fb27eb6a66e3e520c8afcdc6129" Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.232247 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bb2b6" Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.413979 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.414213 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" containerID="cri-o://16a323f53cc673a27c5f84730f7fe97e8335d7112e1d9c9754c1bd10bb1f5b5c" gracePeriod=30 Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.414322 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" containerID="cri-o://04a7dd6dbb264df69f733b8adc9289a5bfdfeaf5184143a2e3badfd5c201b98b" gracePeriod=30 Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.428329 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.428564 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" containerID="cri-o://1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" gracePeriod=30 Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.499875 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.500125 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" containerID="cri-o://eac5280c8add0b4b9199a29b8732420e42b3fdde1da2ef89de524585e48304ce" gracePeriod=30 Nov 22 08:44:42 crc kubenswrapper[4856]: I1122 08:44:42.500271 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" containerID="cri-o://7eb599cca049e58a1214ab401ad7fbbfe537e484fe5ba55db233ef84c50389c9" gracePeriod=30 Nov 22 08:44:43 crc kubenswrapper[4856]: I1122 08:44:43.242902 4856 generic.go:334] "Generic (PLEG): container finished" podID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerID="16a323f53cc673a27c5f84730f7fe97e8335d7112e1d9c9754c1bd10bb1f5b5c" exitCode=143 Nov 22 08:44:43 crc kubenswrapper[4856]: I1122 08:44:43.242970 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42687cfc-ebd9-4f23-a4e1-1443f5539dad","Type":"ContainerDied","Data":"16a323f53cc673a27c5f84730f7fe97e8335d7112e1d9c9754c1bd10bb1f5b5c"} Nov 22 08:44:43 crc kubenswrapper[4856]: E1122 08:44:43.504686 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:43 crc kubenswrapper[4856]: E1122 08:44:43.506481 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:43 crc kubenswrapper[4856]: E1122 08:44:43.507540 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:43 crc kubenswrapper[4856]: E1122 08:44:43.507575 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:44:45 crc kubenswrapper[4856]: I1122 08:44:45.261187 4856 generic.go:334] "Generic (PLEG): container finished" podID="43139350-8f06-4109-91a4-a71c0795a2cd" containerID="eac5280c8add0b4b9199a29b8732420e42b3fdde1da2ef89de524585e48304ce" exitCode=143 Nov 22 08:44:45 crc kubenswrapper[4856]: I1122 08:44:45.261237 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43139350-8f06-4109-91a4-a71c0795a2cd","Type":"ContainerDied","Data":"eac5280c8add0b4b9199a29b8732420e42b3fdde1da2ef89de524585e48304ce"} Nov 22 08:44:48 crc kubenswrapper[4856]: E1122 08:44:48.506131 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:48 crc kubenswrapper[4856]: E1122 08:44:48.509565 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:48 crc kubenswrapper[4856]: E1122 08:44:48.511920 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:48 crc kubenswrapper[4856]: E1122 08:44:48.511968 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:44:53 crc kubenswrapper[4856]: E1122 08:44:53.505100 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:53 crc kubenswrapper[4856]: E1122 08:44:53.507584 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:53 crc kubenswrapper[4856]: E1122 08:44:53.509162 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:53 crc kubenswrapper[4856]: E1122 08:44:53.509207 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:44:53 crc kubenswrapper[4856]: I1122 08:44:53.710587 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:44:53 crc kubenswrapper[4856]: E1122 08:44:53.710917 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.358707 4856 generic.go:334] "Generic (PLEG): container finished" podID="43139350-8f06-4109-91a4-a71c0795a2cd" containerID="7eb599cca049e58a1214ab401ad7fbbfe537e484fe5ba55db233ef84c50389c9" exitCode=0 Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.358787 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43139350-8f06-4109-91a4-a71c0795a2cd","Type":"ContainerDied","Data":"7eb599cca049e58a1214ab401ad7fbbfe537e484fe5ba55db233ef84c50389c9"} Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.359445 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"43139350-8f06-4109-91a4-a71c0795a2cd","Type":"ContainerDied","Data":"cbe7706be31018fa7fa620102f329e603703753711a4626d08e07a330b57dcf3"} Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.359464 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe7706be31018fa7fa620102f329e603703753711a4626d08e07a330b57dcf3" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.362950 4856 generic.go:334] "Generic (PLEG): container finished" podID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerID="04a7dd6dbb264df69f733b8adc9289a5bfdfeaf5184143a2e3badfd5c201b98b" exitCode=0 Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.362984 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42687cfc-ebd9-4f23-a4e1-1443f5539dad","Type":"ContainerDied","Data":"04a7dd6dbb264df69f733b8adc9289a5bfdfeaf5184143a2e3badfd5c201b98b"} Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.381414 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.493947 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-config-data\") pod \"43139350-8f06-4109-91a4-a71c0795a2cd\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.494019 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-combined-ca-bundle\") pod \"43139350-8f06-4109-91a4-a71c0795a2cd\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.494161 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43139350-8f06-4109-91a4-a71c0795a2cd-logs\") pod \"43139350-8f06-4109-91a4-a71c0795a2cd\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.494612 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43139350-8f06-4109-91a4-a71c0795a2cd-logs" (OuterVolumeSpecName: "logs") pod "43139350-8f06-4109-91a4-a71c0795a2cd" (UID: "43139350-8f06-4109-91a4-a71c0795a2cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.494647 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gncb\" (UniqueName: \"kubernetes.io/projected/43139350-8f06-4109-91a4-a71c0795a2cd-kube-api-access-4gncb\") pod \"43139350-8f06-4109-91a4-a71c0795a2cd\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.494713 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-nova-metadata-tls-certs\") pod \"43139350-8f06-4109-91a4-a71c0795a2cd\" (UID: \"43139350-8f06-4109-91a4-a71c0795a2cd\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.495024 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/43139350-8f06-4109-91a4-a71c0795a2cd-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.501936 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43139350-8f06-4109-91a4-a71c0795a2cd-kube-api-access-4gncb" (OuterVolumeSpecName: "kube-api-access-4gncb") pod "43139350-8f06-4109-91a4-a71c0795a2cd" (UID: "43139350-8f06-4109-91a4-a71c0795a2cd"). InnerVolumeSpecName "kube-api-access-4gncb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.525996 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-config-data" (OuterVolumeSpecName: "config-data") pod "43139350-8f06-4109-91a4-a71c0795a2cd" (UID: "43139350-8f06-4109-91a4-a71c0795a2cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.526880 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43139350-8f06-4109-91a4-a71c0795a2cd" (UID: "43139350-8f06-4109-91a4-a71c0795a2cd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.564801 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "43139350-8f06-4109-91a4-a71c0795a2cd" (UID: "43139350-8f06-4109-91a4-a71c0795a2cd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.597670 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gncb\" (UniqueName: \"kubernetes.io/projected/43139350-8f06-4109-91a4-a71c0795a2cd-kube-api-access-4gncb\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.597713 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.597727 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.597742 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43139350-8f06-4109-91a4-a71c0795a2cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.645045 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.802012 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42687cfc-ebd9-4f23-a4e1-1443f5539dad-logs\") pod \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.802453 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-config-data\") pod \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.802510 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt4n6\" (UniqueName: \"kubernetes.io/projected/42687cfc-ebd9-4f23-a4e1-1443f5539dad-kube-api-access-nt4n6\") pod \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.802619 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-combined-ca-bundle\") pod \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\" (UID: \"42687cfc-ebd9-4f23-a4e1-1443f5539dad\") " Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.802657 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42687cfc-ebd9-4f23-a4e1-1443f5539dad-logs" (OuterVolumeSpecName: "logs") pod "42687cfc-ebd9-4f23-a4e1-1443f5539dad" (UID: "42687cfc-ebd9-4f23-a4e1-1443f5539dad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.803533 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/42687cfc-ebd9-4f23-a4e1-1443f5539dad-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.807747 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42687cfc-ebd9-4f23-a4e1-1443f5539dad-kube-api-access-nt4n6" (OuterVolumeSpecName: "kube-api-access-nt4n6") pod "42687cfc-ebd9-4f23-a4e1-1443f5539dad" (UID: "42687cfc-ebd9-4f23-a4e1-1443f5539dad"). InnerVolumeSpecName "kube-api-access-nt4n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.829262 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-config-data" (OuterVolumeSpecName: "config-data") pod "42687cfc-ebd9-4f23-a4e1-1443f5539dad" (UID: "42687cfc-ebd9-4f23-a4e1-1443f5539dad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.838463 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42687cfc-ebd9-4f23-a4e1-1443f5539dad" (UID: "42687cfc-ebd9-4f23-a4e1-1443f5539dad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.905114 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.905157 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt4n6\" (UniqueName: \"kubernetes.io/projected/42687cfc-ebd9-4f23-a4e1-1443f5539dad-kube-api-access-nt4n6\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:56 crc kubenswrapper[4856]: I1122 08:44:56.905299 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42687cfc-ebd9-4f23-a4e1-1443f5539dad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.373204 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"42687cfc-ebd9-4f23-a4e1-1443f5539dad","Type":"ContainerDied","Data":"6b7fc932f314ba748a5608c28ff739645c5999d281b1a90753eec87b2d2e4db4"} Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.373238 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.373264 4856 scope.go:117] "RemoveContainer" containerID="04a7dd6dbb264df69f733b8adc9289a5bfdfeaf5184143a2e3badfd5c201b98b" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.373221 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.399866 4856 scope.go:117] "RemoveContainer" containerID="16a323f53cc673a27c5f84730f7fe97e8335d7112e1d9c9754c1bd10bb1f5b5c" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.408123 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.419798 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.432789 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.449410 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.462314 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: E1122 08:44:57.464796 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="482880bb-c065-4bed-be16-bad626eac7ed" containerName="nova-manage" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.464924 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="482880bb-c065-4bed-be16-bad626eac7ed" containerName="nova-manage" Nov 22 08:44:57 crc kubenswrapper[4856]: E1122 08:44:57.465015 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.465094 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" Nov 22 08:44:57 crc kubenswrapper[4856]: E1122 08:44:57.465173 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.465248 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" Nov 22 08:44:57 crc kubenswrapper[4856]: E1122 08:44:57.465357 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.465423 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" Nov 22 08:44:57 crc kubenswrapper[4856]: E1122 08:44:57.465500 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.465649 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.467183 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-log" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.467322 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="482880bb-c065-4bed-be16-bad626eac7ed" containerName="nova-manage" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.467429 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-log" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.467568 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" containerName="nova-api-api" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.467664 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" containerName="nova-metadata-metadata" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.469971 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.479338 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.479645 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.491024 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.501283 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.503304 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.505930 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.508298 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.616075 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25bzn\" (UniqueName: \"kubernetes.io/projected/a8ff87d7-f980-4337-84b7-db2032ed5ebe-kube-api-access-25bzn\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.616137 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-config-data\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.616203 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb038800-5d2f-42f1-85b1-05d8aa807383-logs\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.616659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.616796 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggzn6\" (UniqueName: \"kubernetes.io/projected/fb038800-5d2f-42f1-85b1-05d8aa807383-kube-api-access-ggzn6\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.616858 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.616903 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.617015 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ff87d7-f980-4337-84b7-db2032ed5ebe-logs\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.617044 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-config-data\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718582 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718684 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggzn6\" (UniqueName: \"kubernetes.io/projected/fb038800-5d2f-42f1-85b1-05d8aa807383-kube-api-access-ggzn6\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718719 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718745 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718795 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ff87d7-f980-4337-84b7-db2032ed5ebe-logs\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718820 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-config-data\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718882 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25bzn\" (UniqueName: \"kubernetes.io/projected/a8ff87d7-f980-4337-84b7-db2032ed5ebe-kube-api-access-25bzn\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.718930 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-config-data\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.719326 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb038800-5d2f-42f1-85b1-05d8aa807383-logs\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.720406 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ff87d7-f980-4337-84b7-db2032ed5ebe-logs\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.720422 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb038800-5d2f-42f1-85b1-05d8aa807383-logs\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.724263 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.724354 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.724389 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-config-data\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.725238 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.726309 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-config-data\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.733974 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggzn6\" (UniqueName: \"kubernetes.io/projected/fb038800-5d2f-42f1-85b1-05d8aa807383-kube-api-access-ggzn6\") pod \"nova-metadata-0\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.735670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25bzn\" (UniqueName: \"kubernetes.io/projected/a8ff87d7-f980-4337-84b7-db2032ed5ebe-kube-api-access-25bzn\") pod \"nova-api-0\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " pod="openstack/nova-api-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.801623 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 08:44:57 crc kubenswrapper[4856]: I1122 08:44:57.821428 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:44:58 crc kubenswrapper[4856]: I1122 08:44:58.267885 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:44:58 crc kubenswrapper[4856]: W1122 08:44:58.271476 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8ff87d7_f980_4337_84b7_db2032ed5ebe.slice/crio-569d899eaebd03022599fb1c01cf04b51c3bb7693bcb5e146fd5a721a2cd23d1 WatchSource:0}: Error finding container 569d899eaebd03022599fb1c01cf04b51c3bb7693bcb5e146fd5a721a2cd23d1: Status 404 returned error can't find the container with id 569d899eaebd03022599fb1c01cf04b51c3bb7693bcb5e146fd5a721a2cd23d1 Nov 22 08:44:58 crc kubenswrapper[4856]: I1122 08:44:58.274950 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 08:44:58 crc kubenswrapper[4856]: I1122 08:44:58.382129 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb038800-5d2f-42f1-85b1-05d8aa807383","Type":"ContainerStarted","Data":"d00503ba8cd7c221f0d7212ee4eef6c987be5c4d94e3ec8d4331f91c4f46acbe"} Nov 22 08:44:58 crc kubenswrapper[4856]: I1122 08:44:58.383395 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8ff87d7-f980-4337-84b7-db2032ed5ebe","Type":"ContainerStarted","Data":"569d899eaebd03022599fb1c01cf04b51c3bb7693bcb5e146fd5a721a2cd23d1"} Nov 22 08:44:58 crc kubenswrapper[4856]: E1122 08:44:58.504823 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:58 crc kubenswrapper[4856]: E1122 08:44:58.506589 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:58 crc kubenswrapper[4856]: E1122 08:44:58.508880 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:44:58 crc kubenswrapper[4856]: E1122 08:44:58.508971 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:44:58 crc kubenswrapper[4856]: I1122 08:44:58.725600 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42687cfc-ebd9-4f23-a4e1-1443f5539dad" path="/var/lib/kubelet/pods/42687cfc-ebd9-4f23-a4e1-1443f5539dad/volumes" Nov 22 08:44:58 crc kubenswrapper[4856]: I1122 08:44:58.726396 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43139350-8f06-4109-91a4-a71c0795a2cd" path="/var/lib/kubelet/pods/43139350-8f06-4109-91a4-a71c0795a2cd/volumes" Nov 22 
08:44:59 crc kubenswrapper[4856]: I1122 08:44:59.395831 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb038800-5d2f-42f1-85b1-05d8aa807383","Type":"ContainerStarted","Data":"4560371460c6dd4d4d43bcf8040fe8d4a482df1312737f51ad0d59f12c7664f0"} Nov 22 08:44:59 crc kubenswrapper[4856]: I1122 08:44:59.396137 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb038800-5d2f-42f1-85b1-05d8aa807383","Type":"ContainerStarted","Data":"2b5b955717e9bd38ae5b9a0616eaf2cac006e23a3db4204566fc610bea28865b"} Nov 22 08:44:59 crc kubenswrapper[4856]: I1122 08:44:59.399254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8ff87d7-f980-4337-84b7-db2032ed5ebe","Type":"ContainerStarted","Data":"ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f"} Nov 22 08:44:59 crc kubenswrapper[4856]: I1122 08:44:59.399983 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8ff87d7-f980-4337-84b7-db2032ed5ebe","Type":"ContainerStarted","Data":"d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a"} Nov 22 08:44:59 crc kubenswrapper[4856]: I1122 08:44:59.421058 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.421037459 podStartE2EDuration="2.421037459s" podCreationTimestamp="2025-11-22 08:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:44:59.413003822 +0000 UTC m=+6141.826397080" watchObservedRunningTime="2025-11-22 08:44:59.421037459 +0000 UTC m=+6141.834430717" Nov 22 08:44:59 crc kubenswrapper[4856]: I1122 08:44:59.430650 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.430628267 podStartE2EDuration="2.430628267s" podCreationTimestamp="2025-11-22 08:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:44:59.429166258 +0000 UTC m=+6141.842559536" watchObservedRunningTime="2025-11-22 08:44:59.430628267 +0000 UTC m=+6141.844021525" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.141382 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm"] Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.142562 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.146081 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.146483 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.152698 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm"] Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.173496 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1050fae-88db-48d3-8b09-87c3fe96a967-secret-volume\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.173695 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lb7\" (UniqueName: \"kubernetes.io/projected/a1050fae-88db-48d3-8b09-87c3fe96a967-kube-api-access-86lb7\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.173743 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1050fae-88db-48d3-8b09-87c3fe96a967-config-volume\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.275205 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86lb7\" (UniqueName: \"kubernetes.io/projected/a1050fae-88db-48d3-8b09-87c3fe96a967-kube-api-access-86lb7\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.275306 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1050fae-88db-48d3-8b09-87c3fe96a967-config-volume\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.275394 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1050fae-88db-48d3-8b09-87c3fe96a967-secret-volume\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.276269 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1050fae-88db-48d3-8b09-87c3fe96a967-config-volume\") pod 
\"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.280824 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1050fae-88db-48d3-8b09-87c3fe96a967-secret-volume\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.291193 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86lb7\" (UniqueName: \"kubernetes.io/projected/a1050fae-88db-48d3-8b09-87c3fe96a967-kube-api-access-86lb7\") pod \"collect-profiles-29396685-lv6nm\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.460331 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:00 crc kubenswrapper[4856]: I1122 08:45:00.893544 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm"] Nov 22 08:45:00 crc kubenswrapper[4856]: W1122 08:45:00.896910 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1050fae_88db_48d3_8b09_87c3fe96a967.slice/crio-2cddf2ef4b1afb91c0fad05807744376b4e898e57201bd4d0c307a1a12293a2f WatchSource:0}: Error finding container 2cddf2ef4b1afb91c0fad05807744376b4e898e57201bd4d0c307a1a12293a2f: Status 404 returned error can't find the container with id 2cddf2ef4b1afb91c0fad05807744376b4e898e57201bd4d0c307a1a12293a2f Nov 22 08:45:01 crc kubenswrapper[4856]: I1122 08:45:01.417251 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" event={"ID":"a1050fae-88db-48d3-8b09-87c3fe96a967","Type":"ContainerStarted","Data":"d51c210ed60dcedb4ea79256ad59c40c4d926cf361d6a8619e38780913db5594"} Nov 22 08:45:01 crc kubenswrapper[4856]: I1122 08:45:01.417307 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" event={"ID":"a1050fae-88db-48d3-8b09-87c3fe96a967","Type":"ContainerStarted","Data":"2cddf2ef4b1afb91c0fad05807744376b4e898e57201bd4d0c307a1a12293a2f"} Nov 22 08:45:01 crc kubenswrapper[4856]: I1122 08:45:01.433188 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" podStartSLOduration=1.433169743 podStartE2EDuration="1.433169743s" podCreationTimestamp="2025-11-22 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:45:01.43119377 +0000 UTC m=+6143.844587038" watchObservedRunningTime="2025-11-22 08:45:01.433169743 +0000 UTC m=+6143.846563001" Nov 22 08:45:02 crc kubenswrapper[4856]: I1122 08:45:02.427086 4856 generic.go:334] "Generic (PLEG): container finished" podID="a1050fae-88db-48d3-8b09-87c3fe96a967" containerID="d51c210ed60dcedb4ea79256ad59c40c4d926cf361d6a8619e38780913db5594" exitCode=0 Nov 22 08:45:02 crc kubenswrapper[4856]: I1122 08:45:02.427190 
4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" event={"ID":"a1050fae-88db-48d3-8b09-87c3fe96a967","Type":"ContainerDied","Data":"d51c210ed60dcedb4ea79256ad59c40c4d926cf361d6a8619e38780913db5594"} Nov 22 08:45:02 crc kubenswrapper[4856]: I1122 08:45:02.801701 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 08:45:02 crc kubenswrapper[4856]: I1122 08:45:02.802071 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 08:45:03 crc kubenswrapper[4856]: E1122 08:45:03.504628 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:03 crc kubenswrapper[4856]: E1122 08:45:03.506708 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:03 crc kubenswrapper[4856]: E1122 08:45:03.508721 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:03 crc kubenswrapper[4856]: E1122 08:45:03.508762 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.552257 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.648408 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1050fae-88db-48d3-8b09-87c3fe96a967-config-volume\") pod \"a1050fae-88db-48d3-8b09-87c3fe96a967\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.648757 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86lb7\" (UniqueName: \"kubernetes.io/projected/a1050fae-88db-48d3-8b09-87c3fe96a967-kube-api-access-86lb7\") pod \"a1050fae-88db-48d3-8b09-87c3fe96a967\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.648912 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1050fae-88db-48d3-8b09-87c3fe96a967-secret-volume\") pod \"a1050fae-88db-48d3-8b09-87c3fe96a967\" (UID: \"a1050fae-88db-48d3-8b09-87c3fe96a967\") " Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.649167 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1050fae-88db-48d3-8b09-87c3fe96a967-config-volume" (OuterVolumeSpecName: "config-volume") pod "a1050fae-88db-48d3-8b09-87c3fe96a967" (UID: "a1050fae-88db-48d3-8b09-87c3fe96a967"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.649584 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1050fae-88db-48d3-8b09-87c3fe96a967-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.654883 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1050fae-88db-48d3-8b09-87c3fe96a967-kube-api-access-86lb7" (OuterVolumeSpecName: "kube-api-access-86lb7") pod "a1050fae-88db-48d3-8b09-87c3fe96a967" (UID: "a1050fae-88db-48d3-8b09-87c3fe96a967"). InnerVolumeSpecName "kube-api-access-86lb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.654904 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1050fae-88db-48d3-8b09-87c3fe96a967-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a1050fae-88db-48d3-8b09-87c3fe96a967" (UID: "a1050fae-88db-48d3-8b09-87c3fe96a967"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.710194 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:45:04 crc kubenswrapper[4856]: E1122 08:45:04.710481 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.750819 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86lb7\" (UniqueName: \"kubernetes.io/projected/a1050fae-88db-48d3-8b09-87c3fe96a967-kube-api-access-86lb7\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:04 crc kubenswrapper[4856]: I1122 08:45:04.750855 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1050fae-88db-48d3-8b09-87c3fe96a967-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:05 crc kubenswrapper[4856]: I1122 08:45:05.460494 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" event={"ID":"a1050fae-88db-48d3-8b09-87c3fe96a967","Type":"ContainerDied","Data":"2cddf2ef4b1afb91c0fad05807744376b4e898e57201bd4d0c307a1a12293a2f"} Nov 22 08:45:05 crc kubenswrapper[4856]: I1122 08:45:05.460549 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cddf2ef4b1afb91c0fad05807744376b4e898e57201bd4d0c307a1a12293a2f" Nov 22 08:45:05 crc kubenswrapper[4856]: I1122 08:45:05.460559 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm" Nov 22 08:45:05 crc kubenswrapper[4856]: I1122 08:45:05.632336 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6"] Nov 22 08:45:05 crc kubenswrapper[4856]: I1122 08:45:05.640213 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396640-rfxl6"] Nov 22 08:45:06 crc kubenswrapper[4856]: I1122 08:45:06.725167 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44a36f1-1a04-4522-a338-6161608fbdc4" path="/var/lib/kubelet/pods/c44a36f1-1a04-4522-a338-6161608fbdc4/volumes" Nov 22 08:45:07 crc kubenswrapper[4856]: I1122 08:45:07.803195 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 08:45:07 crc kubenswrapper[4856]: I1122 08:45:07.803530 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 08:45:07 crc kubenswrapper[4856]: I1122 08:45:07.822449 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 08:45:07 crc kubenswrapper[4856]: I1122 08:45:07.822551 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 08:45:08 crc kubenswrapper[4856]: E1122 08:45:08.505065 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:08 crc kubenswrapper[4856]: E1122 08:45:08.506755 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:08 crc kubenswrapper[4856]: E1122 08:45:08.508347 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:08 crc kubenswrapper[4856]: E1122 08:45:08.508380 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:45:08 crc kubenswrapper[4856]: I1122 08:45:08.815701 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:45:08 crc kubenswrapper[4856]: I1122 08:45:08.815701 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.98:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:45:08 crc kubenswrapper[4856]: I1122 08:45:08.909686 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.99:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:45:08 crc kubenswrapper[4856]: I1122 08:45:08.909729 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.99:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:45:13 crc kubenswrapper[4856]: E1122 08:45:13.503827 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc is running failed: container process not found" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:13 crc kubenswrapper[4856]: E1122 08:45:13.504831 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc is running failed: container process not found" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:13 crc kubenswrapper[4856]: E1122 08:45:13.505721 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc is running failed: container process not found" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 08:45:13 crc kubenswrapper[4856]: E1122 08:45:13.505768 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:45:13 crc kubenswrapper[4856]: I1122 08:45:13.533411 4856 generic.go:334] "Generic (PLEG): container finished" podID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" exitCode=137 Nov 22 08:45:13 crc kubenswrapper[4856]: I1122 08:45:13.533462 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc7a5c86-a6a8-4da4-970c-a34886016faa","Type":"ContainerDied","Data":"1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc"} Nov 22 08:45:13 crc kubenswrapper[4856]: I1122 08:45:13.867085 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.032353 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-combined-ca-bundle\") pod \"dc7a5c86-a6a8-4da4-970c-a34886016faa\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.032422 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/dc7a5c86-a6a8-4da4-970c-a34886016faa-kube-api-access-ph55w\") pod \"dc7a5c86-a6a8-4da4-970c-a34886016faa\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.032565 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-config-data\") pod \"dc7a5c86-a6a8-4da4-970c-a34886016faa\" (UID: \"dc7a5c86-a6a8-4da4-970c-a34886016faa\") " Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.038455 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7a5c86-a6a8-4da4-970c-a34886016faa-kube-api-access-ph55w" (OuterVolumeSpecName: "kube-api-access-ph55w") pod "dc7a5c86-a6a8-4da4-970c-a34886016faa" (UID: "dc7a5c86-a6a8-4da4-970c-a34886016faa"). InnerVolumeSpecName "kube-api-access-ph55w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.071675 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-config-data" (OuterVolumeSpecName: "config-data") pod "dc7a5c86-a6a8-4da4-970c-a34886016faa" (UID: "dc7a5c86-a6a8-4da4-970c-a34886016faa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.071757 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc7a5c86-a6a8-4da4-970c-a34886016faa" (UID: "dc7a5c86-a6a8-4da4-970c-a34886016faa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.134917 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.134954 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph55w\" (UniqueName: \"kubernetes.io/projected/dc7a5c86-a6a8-4da4-970c-a34886016faa-kube-api-access-ph55w\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.134966 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7a5c86-a6a8-4da4-970c-a34886016faa-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.543546 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"dc7a5c86-a6a8-4da4-970c-a34886016faa","Type":"ContainerDied","Data":"7dd96f27d0fb07f9026e9f50f6302e9c5f19eca88276869c41f3af9160dc6e22"} Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.543665 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.544695 4856 scope.go:117] "RemoveContainer" containerID="1ba49776018b0dfac413831946e03ee576c5da203ccd50e0417df22c93d030dc" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.583103 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.601321 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.612580 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:45:14 crc kubenswrapper[4856]: E1122 08:45:14.613451 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1050fae-88db-48d3-8b09-87c3fe96a967" containerName="collect-profiles" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.613588 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1050fae-88db-48d3-8b09-87c3fe96a967" containerName="collect-profiles" Nov 22 08:45:14 crc kubenswrapper[4856]: E1122 08:45:14.613696 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.613774 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.614070 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1050fae-88db-48d3-8b09-87c3fe96a967" containerName="collect-profiles" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.614222 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" containerName="nova-scheduler-scheduler" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.615196 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.617603 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.621761 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.721435 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7a5c86-a6a8-4da4-970c-a34886016faa" path="/var/lib/kubelet/pods/dc7a5c86-a6a8-4da4-970c-a34886016faa/volumes" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.746545 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5q79\" (UniqueName: \"kubernetes.io/projected/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-kube-api-access-h5q79\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.746655 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.746750 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-config-data\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.849694 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.849814 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-config-data\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.850038 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5q79\" (UniqueName: \"kubernetes.io/projected/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-kube-api-access-h5q79\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.853453 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.854787 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-config-data\") pod \"nova-scheduler-0\" (UID: 
\"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.865614 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5q79\" (UniqueName: \"kubernetes.io/projected/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-kube-api-access-h5q79\") pod \"nova-scheduler-0\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " pod="openstack/nova-scheduler-0" Nov 22 08:45:14 crc kubenswrapper[4856]: I1122 08:45:14.936600 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 08:45:15 crc kubenswrapper[4856]: I1122 08:45:15.188981 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 08:45:15 crc kubenswrapper[4856]: W1122 08:45:15.191676 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ce55421_c0e1_4f25_9a74_8ae0e35b250c.slice/crio-c6bb006538a4466e530c4133563d2a161e79e92d63344b749d608c10e489349f WatchSource:0}: Error finding container c6bb006538a4466e530c4133563d2a161e79e92d63344b749d608c10e489349f: Status 404 returned error can't find the container with id c6bb006538a4466e530c4133563d2a161e79e92d63344b749d608c10e489349f Nov 22 08:45:15 crc kubenswrapper[4856]: I1122 08:45:15.558127 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ce55421-c0e1-4f25-9a74-8ae0e35b250c","Type":"ContainerStarted","Data":"c6bb006538a4466e530c4133563d2a161e79e92d63344b749d608c10e489349f"} Nov 22 08:45:16 crc kubenswrapper[4856]: I1122 08:45:16.571258 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ce55421-c0e1-4f25-9a74-8ae0e35b250c","Type":"ContainerStarted","Data":"7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62"} Nov 22 08:45:16 crc kubenswrapper[4856]: I1122 08:45:16.597340 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.597320727 podStartE2EDuration="2.597320727s" podCreationTimestamp="2025-11-22 08:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:45:16.58887375 +0000 UTC m=+6159.002267008" watchObservedRunningTime="2025-11-22 08:45:16.597320727 +0000 UTC m=+6159.010713985" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.710528 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:45:17 crc kubenswrapper[4856]: E1122 08:45:17.712157 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.809364 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.810638 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.823170 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.831953 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.832933 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.834715 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 08:45:17 crc kubenswrapper[4856]: I1122 08:45:17.835383 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.590342 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.594114 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.595591 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.785425 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f89c44cf-dxqq7"] Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.787343 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.813037 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f89c44cf-dxqq7"] Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.938400 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-sb\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.938751 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-nb\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.938888 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdq6\" (UniqueName: \"kubernetes.io/projected/7d1ca7c1-892b-402a-a523-407168b2deb8-kube-api-access-drdq6\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.939011 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-dns-svc\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:18 crc kubenswrapper[4856]: I1122 08:45:18.939285 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-config\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.042109 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-config\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.042207 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-sb\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.042242 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-nb\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.042274 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdq6\" (UniqueName: \"kubernetes.io/projected/7d1ca7c1-892b-402a-a523-407168b2deb8-kube-api-access-drdq6\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.042306 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-dns-svc\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.043346 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-nb\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.043402 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-sb\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.043451 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-config\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.044303 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-dns-svc\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: 
\"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.080167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdq6\" (UniqueName: \"kubernetes.io/projected/7d1ca7c1-892b-402a-a523-407168b2deb8-kube-api-access-drdq6\") pod \"dnsmasq-dns-5f89c44cf-dxqq7\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.121236 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.671529 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f89c44cf-dxqq7"] Nov 22 08:45:19 crc kubenswrapper[4856]: I1122 08:45:19.937407 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 08:45:20 crc kubenswrapper[4856]: I1122 08:45:20.608328 4856 generic.go:334] "Generic (PLEG): container finished" podID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerID="e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2" exitCode=0 Nov 22 08:45:20 crc kubenswrapper[4856]: I1122 08:45:20.608578 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" event={"ID":"7d1ca7c1-892b-402a-a523-407168b2deb8","Type":"ContainerDied","Data":"e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2"} Nov 22 08:45:20 crc kubenswrapper[4856]: I1122 08:45:20.610029 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" event={"ID":"7d1ca7c1-892b-402a-a523-407168b2deb8","Type":"ContainerStarted","Data":"89efa9ec7a3b43ccff55bd5b4a0b920ed97c6c54837f76df3402a0c80e1767bf"} Nov 22 08:45:21 crc kubenswrapper[4856]: I1122 08:45:21.620030 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" event={"ID":"7d1ca7c1-892b-402a-a523-407168b2deb8","Type":"ContainerStarted","Data":"c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0"} Nov 22 08:45:21 crc kubenswrapper[4856]: I1122 08:45:21.620620 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:21 crc kubenswrapper[4856]: I1122 08:45:21.649130 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" podStartSLOduration=3.649105273 podStartE2EDuration="3.649105273s" podCreationTimestamp="2025-11-22 08:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:45:21.639390911 +0000 UTC m=+6164.052784189" watchObservedRunningTime="2025-11-22 08:45:21.649105273 +0000 UTC m=+6164.062498531" Nov 22 08:45:21 crc kubenswrapper[4856]: I1122 08:45:21.920026 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:45:21 crc kubenswrapper[4856]: I1122 08:45:21.920640 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-log" containerID="cri-o://d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a" gracePeriod=30 Nov 22 08:45:21 crc kubenswrapper[4856]: I1122 08:45:21.921071 4856 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/nova-api-0" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-api" containerID="cri-o://ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f" gracePeriod=30 Nov 22 08:45:22 crc kubenswrapper[4856]: I1122 08:45:22.634204 4856 generic.go:334] "Generic (PLEG): container finished" podID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerID="d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a" exitCode=143 Nov 22 08:45:22 crc kubenswrapper[4856]: I1122 08:45:22.634289 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8ff87d7-f980-4337-84b7-db2032ed5ebe","Type":"ContainerDied","Data":"d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a"} Nov 22 08:45:23 crc kubenswrapper[4856]: I1122 08:45:23.448677 4856 scope.go:117] "RemoveContainer" containerID="443a82891cbcb452272df1904faa8139afa45b83db12b19d18783b444c183faa" Nov 22 08:45:24 crc kubenswrapper[4856]: I1122 08:45:24.937589 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 08:45:24 crc kubenswrapper[4856]: I1122 08:45:24.968346 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.497380 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.563471 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-config-data\") pod \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.563604 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25bzn\" (UniqueName: \"kubernetes.io/projected/a8ff87d7-f980-4337-84b7-db2032ed5ebe-kube-api-access-25bzn\") pod \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.563685 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-combined-ca-bundle\") pod \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.563748 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ff87d7-f980-4337-84b7-db2032ed5ebe-logs\") pod \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\" (UID: \"a8ff87d7-f980-4337-84b7-db2032ed5ebe\") " Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.564406 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8ff87d7-f980-4337-84b7-db2032ed5ebe-logs" (OuterVolumeSpecName: "logs") pod "a8ff87d7-f980-4337-84b7-db2032ed5ebe" (UID: "a8ff87d7-f980-4337-84b7-db2032ed5ebe"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.582236 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ff87d7-f980-4337-84b7-db2032ed5ebe-kube-api-access-25bzn" (OuterVolumeSpecName: "kube-api-access-25bzn") pod "a8ff87d7-f980-4337-84b7-db2032ed5ebe" (UID: "a8ff87d7-f980-4337-84b7-db2032ed5ebe"). InnerVolumeSpecName "kube-api-access-25bzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.594134 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8ff87d7-f980-4337-84b7-db2032ed5ebe" (UID: "a8ff87d7-f980-4337-84b7-db2032ed5ebe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.601908 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-config-data" (OuterVolumeSpecName: "config-data") pod "a8ff87d7-f980-4337-84b7-db2032ed5ebe" (UID: "a8ff87d7-f980-4337-84b7-db2032ed5ebe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.663300 4856 generic.go:334] "Generic (PLEG): container finished" podID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerID="ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f" exitCode=0 Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.663358 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.663405 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8ff87d7-f980-4337-84b7-db2032ed5ebe","Type":"ContainerDied","Data":"ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f"} Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.663431 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a8ff87d7-f980-4337-84b7-db2032ed5ebe","Type":"ContainerDied","Data":"569d899eaebd03022599fb1c01cf04b51c3bb7693bcb5e146fd5a721a2cd23d1"} Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.663446 4856 scope.go:117] "RemoveContainer" containerID="ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.665399 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25bzn\" (UniqueName: \"kubernetes.io/projected/a8ff87d7-f980-4337-84b7-db2032ed5ebe-kube-api-access-25bzn\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.665432 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.665446 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ff87d7-f980-4337-84b7-db2032ed5ebe-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.665457 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a8ff87d7-f980-4337-84b7-db2032ed5ebe-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.704451 4856 scope.go:117] "RemoveContainer" containerID="d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.717897 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.737295 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.755117 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.765002 4856 scope.go:117] "RemoveContainer" containerID="ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f" Nov 22 08:45:25 crc kubenswrapper[4856]: E1122 08:45:25.773764 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f\": container with ID starting with ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f not found: ID does not exist" containerID="ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.773853 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f"} err="failed to get container status \"ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f\": rpc error: code = NotFound desc = could not find container \"ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f\": container with ID starting with ee5b49b4c3cb0ef34d1bd6d84fb08e3985d33066deaf349e5a06dec6e25d030f not found: ID does not exist" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.773885 4856 scope.go:117] "RemoveContainer" containerID="d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.776705 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 08:45:25 crc kubenswrapper[4856]: E1122 08:45:25.777074 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-api" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.777093 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-api" Nov 22 08:45:25 crc kubenswrapper[4856]: E1122 08:45:25.777131 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-log" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.777138 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-log" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.777353 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-log" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.777374 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" containerName="nova-api-api" Nov 22 08:45:25 crc kubenswrapper[4856]: E1122 08:45:25.777734 
4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a\": container with ID starting with d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a not found: ID does not exist" containerID="d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.777791 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a"} err="failed to get container status \"d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a\": rpc error: code = NotFound desc = could not find container \"d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a\": container with ID starting with d715e821ae9349a0e5ced4f6c6c4ecc62ad986c000d966d7b988ccb2f0d8241a not found: ID does not exist" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.779166 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.784165 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.784218 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.784167 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.788045 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.884365 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-config-data\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.884572 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5s8x\" (UniqueName: \"kubernetes.io/projected/bc1b3193-18b5-400b-a11c-7787373cc559-kube-api-access-s5s8x\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.884618 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.884657 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc1b3193-18b5-400b-a11c-7787373cc559-logs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.884674 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.884716 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.988274 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-config-data\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.988350 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5s8x\" (UniqueName: \"kubernetes.io/projected/bc1b3193-18b5-400b-a11c-7787373cc559-kube-api-access-s5s8x\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.988390 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.988412 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc1b3193-18b5-400b-a11c-7787373cc559-logs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.988430 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.988466 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.989463 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc1b3193-18b5-400b-a11c-7787373cc559-logs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.992772 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-internal-tls-certs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.993672 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-public-tls-certs\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.993695 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-config-data\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:25 crc kubenswrapper[4856]: I1122 08:45:25.994791 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:26 crc kubenswrapper[4856]: I1122 08:45:26.010410 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5s8x\" (UniqueName: \"kubernetes.io/projected/bc1b3193-18b5-400b-a11c-7787373cc559-kube-api-access-s5s8x\") pod \"nova-api-0\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " pod="openstack/nova-api-0" Nov 22 08:45:26 crc kubenswrapper[4856]: I1122 08:45:26.102904 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 08:45:26 crc kubenswrapper[4856]: I1122 08:45:26.538458 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 08:45:26 crc kubenswrapper[4856]: W1122 08:45:26.552685 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc1b3193_18b5_400b_a11c_7787373cc559.slice/crio-c8aba31375ae803262782f0eda1688f2a8f6f0fb3ae47b1d0c94ee699b1c2a19 WatchSource:0}: Error finding container c8aba31375ae803262782f0eda1688f2a8f6f0fb3ae47b1d0c94ee699b1c2a19: Status 404 returned error can't find the container with id c8aba31375ae803262782f0eda1688f2a8f6f0fb3ae47b1d0c94ee699b1c2a19 Nov 22 08:45:26 crc kubenswrapper[4856]: I1122 08:45:26.677248 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc1b3193-18b5-400b-a11c-7787373cc559","Type":"ContainerStarted","Data":"c8aba31375ae803262782f0eda1688f2a8f6f0fb3ae47b1d0c94ee699b1c2a19"} Nov 22 08:45:26 crc kubenswrapper[4856]: I1122 08:45:26.726113 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ff87d7-f980-4337-84b7-db2032ed5ebe" path="/var/lib/kubelet/pods/a8ff87d7-f980-4337-84b7-db2032ed5ebe/volumes" Nov 22 08:45:27 crc kubenswrapper[4856]: I1122 08:45:27.690758 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc1b3193-18b5-400b-a11c-7787373cc559","Type":"ContainerStarted","Data":"811699a363f8fa80b6295ee0f719b801ca4867d326580ffff6e9d8cde8caa1c2"} Nov 22 08:45:27 crc kubenswrapper[4856]: I1122 08:45:27.691196 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc1b3193-18b5-400b-a11c-7787373cc559","Type":"ContainerStarted","Data":"db2203dcce0e38961a7c4d02903b0b86c38225490a82559e30eba9d7c39d6d00"} Nov 22 08:45:27 crc kubenswrapper[4856]: I1122 08:45:27.719647 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.719623887 podStartE2EDuration="2.719623887s" podCreationTimestamp="2025-11-22 08:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:45:27.706692069 +0000 UTC m=+6170.120085337" watchObservedRunningTime="2025-11-22 08:45:27.719623887 +0000 UTC m=+6170.133017145" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.124477 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.190654 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbbd65c89-495gj"] Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.190904 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" podUID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerName="dnsmasq-dns" containerID="cri-o://ef1e41f2c9277a64d1a6de8db6459f671cdfcf85c5eb15d70795318e3d82fa0d" gracePeriod=10 Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.715599 4856 generic.go:334] "Generic (PLEG): container finished" podID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerID="ef1e41f2c9277a64d1a6de8db6459f671cdfcf85c5eb15d70795318e3d82fa0d" exitCode=0 Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.715639 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" event={"ID":"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8","Type":"ContainerDied","Data":"ef1e41f2c9277a64d1a6de8db6459f671cdfcf85c5eb15d70795318e3d82fa0d"} Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.716154 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" event={"ID":"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8","Type":"ContainerDied","Data":"c084438205302fa5dad35445d5512dc7123144a8c0d2df3597816c718aead610"} Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.716180 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c084438205302fa5dad35445d5512dc7123144a8c0d2df3597816c718aead610" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.766113 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.877071 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-dns-svc\") pod \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.877206 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-sb\") pod \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.877321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-nb\") pod \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.877338 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-config\") pod \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.877407 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2b7d\" (UniqueName: \"kubernetes.io/projected/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-kube-api-access-t2b7d\") pod \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\" (UID: \"e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8\") " Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.882678 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-kube-api-access-t2b7d" (OuterVolumeSpecName: "kube-api-access-t2b7d") pod "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" (UID: "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8"). InnerVolumeSpecName "kube-api-access-t2b7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.925642 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" (UID: "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.932603 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" (UID: "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.932647 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-config" (OuterVolumeSpecName: "config") pod "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" (UID: "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.933125 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" (UID: "e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.979315 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.979349 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.979361 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.979370 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2b7d\" (UniqueName: \"kubernetes.io/projected/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-kube-api-access-t2b7d\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:29 crc kubenswrapper[4856]: I1122 08:45:29.979382 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:45:30 crc kubenswrapper[4856]: I1122 08:45:30.711015 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:45:30 crc kubenswrapper[4856]: E1122 08:45:30.711902 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:45:30 crc kubenswrapper[4856]: I1122 08:45:30.728388 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fbbd65c89-495gj" Nov 22 08:45:30 crc kubenswrapper[4856]: I1122 08:45:30.756119 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fbbd65c89-495gj"] Nov 22 08:45:30 crc kubenswrapper[4856]: I1122 08:45:30.765194 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fbbd65c89-495gj"] Nov 22 08:45:32 crc kubenswrapper[4856]: I1122 08:45:32.720070 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" path="/var/lib/kubelet/pods/e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8/volumes" Nov 22 08:45:36 crc kubenswrapper[4856]: I1122 08:45:36.103700 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 08:45:36 crc kubenswrapper[4856]: I1122 08:45:36.104348 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 08:45:37 crc kubenswrapper[4856]: I1122 08:45:37.119954 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.103:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:45:37 crc kubenswrapper[4856]: I1122 08:45:37.120413 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.103:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 08:45:41 crc kubenswrapper[4856]: I1122 08:45:41.709053 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:45:41 crc kubenswrapper[4856]: E1122 08:45:41.709719 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:45:46 crc kubenswrapper[4856]: I1122 08:45:46.114114 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 08:45:46 crc kubenswrapper[4856]: I1122 08:45:46.115125 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 08:45:46 crc kubenswrapper[4856]: I1122 08:45:46.115210 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 08:45:46 crc kubenswrapper[4856]: I1122 08:45:46.128240 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 08:45:46 crc kubenswrapper[4856]: I1122 08:45:46.896818 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 08:45:46 crc kubenswrapper[4856]: I1122 08:45:46.903163 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 08:45:52 crc kubenswrapper[4856]: I1122 08:45:52.709857 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:45:52 crc 
kubenswrapper[4856]: E1122 08:45:52.710651 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.025363 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bd5f6bd69-67mxq"] Nov 22 08:45:58 crc kubenswrapper[4856]: E1122 08:45:58.026042 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerName="dnsmasq-dns" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.026056 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerName="dnsmasq-dns" Nov 22 08:45:58 crc kubenswrapper[4856]: E1122 08:45:58.026074 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerName="init" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.026079 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerName="init" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.026291 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5fe2f94-8f3a-4f5c-bc88-76994a6d73b8" containerName="dnsmasq-dns" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.027520 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.030203 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.030386 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-cd252" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.030541 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.031100 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.041991 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bd5f6bd69-67mxq"] Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.101227 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.103035 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-httpd" containerID="cri-o://19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e" gracePeriod=30 Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.108607 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-log" containerID="cri-o://04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e" gracePeriod=30 Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.117084 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0dd2f468-6caa-45c3-a28b-82f97f28162e-horizon-secret-key\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.117397 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-config-data\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.117492 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd2f468-6caa-45c3-a28b-82f97f28162e-logs\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.117647 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hs6jk\" (UniqueName: \"kubernetes.io/projected/0dd2f468-6caa-45c3-a28b-82f97f28162e-kube-api-access-hs6jk\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.117732 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-scripts\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.133254 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-577c8b5cc5-h9cj6"] Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.135221 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.158136 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-577c8b5cc5-h9cj6"] Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.179588 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.180106 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-log" containerID="cri-o://b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59" gracePeriod=30 Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.180207 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-httpd" containerID="cri-o://2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4" gracePeriod=30 Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.222398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tljrz\" (UniqueName: \"kubernetes.io/projected/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-kube-api-access-tljrz\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.222452 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hs6jk\" (UniqueName: \"kubernetes.io/projected/0dd2f468-6caa-45c3-a28b-82f97f28162e-kube-api-access-hs6jk\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.222617 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-scripts\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.222770 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-scripts\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.223120 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-logs\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.223200 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-config-data\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.223249 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0dd2f468-6caa-45c3-a28b-82f97f28162e-horizon-secret-key\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.223300 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-config-data\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.223332 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd2f468-6caa-45c3-a28b-82f97f28162e-logs\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.223356 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-horizon-secret-key\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.224076 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-scripts\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.224147 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd2f468-6caa-45c3-a28b-82f97f28162e-logs\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.225326 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-config-data\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.229590 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0dd2f468-6caa-45c3-a28b-82f97f28162e-horizon-secret-key\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.245377 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hs6jk\" (UniqueName: \"kubernetes.io/projected/0dd2f468-6caa-45c3-a28b-82f97f28162e-kube-api-access-hs6jk\") pod \"horizon-7bd5f6bd69-67mxq\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.324953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-logs\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: 
\"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.325020 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-config-data\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.325068 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-horizon-secret-key\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.325130 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tljrz\" (UniqueName: \"kubernetes.io/projected/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-kube-api-access-tljrz\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.325160 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-scripts\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.325677 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-logs\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.325963 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-scripts\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.328270 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-config-data\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.329747 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-horizon-secret-key\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.345679 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tljrz\" (UniqueName: \"kubernetes.io/projected/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-kube-api-access-tljrz\") pod \"horizon-577c8b5cc5-h9cj6\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.354911 4856 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.461706 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.833062 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bd5f6bd69-67mxq"] Nov 22 08:45:58 crc kubenswrapper[4856]: I1122 08:45:58.962322 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-577c8b5cc5-h9cj6"] Nov 22 08:45:59 crc kubenswrapper[4856]: I1122 08:45:59.016212 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-577c8b5cc5-h9cj6" event={"ID":"9d00a074-d63d-48e2-99e7-54e9f5cebe8a","Type":"ContainerStarted","Data":"6970d0ef86ca86697678323d06db1e47e29aeaa10f52fc10881ab174ba964b87"} Nov 22 08:45:59 crc kubenswrapper[4856]: I1122 08:45:59.018766 4856 generic.go:334] "Generic (PLEG): container finished" podID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerID="b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59" exitCode=143 Nov 22 08:45:59 crc kubenswrapper[4856]: I1122 08:45:59.018853 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f971a400-35ad-42f4-a6a2-818bb7dc026d","Type":"ContainerDied","Data":"b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59"} Nov 22 08:45:59 crc kubenswrapper[4856]: I1122 08:45:59.020860 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd5f6bd69-67mxq" event={"ID":"0dd2f468-6caa-45c3-a28b-82f97f28162e","Type":"ContainerStarted","Data":"9055d250252ff0a20329170232fc774af978126bb5890903a84dd58699c3caf3"} Nov 22 08:45:59 crc kubenswrapper[4856]: I1122 08:45:59.023750 4856 generic.go:334] "Generic (PLEG): container finished" podID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerID="04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e" exitCode=143 Nov 22 08:45:59 crc kubenswrapper[4856]: I1122 08:45:59.023788 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9","Type":"ContainerDied","Data":"04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e"} Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.354823 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bd5f6bd69-67mxq"] Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.409599 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64dd85876-2v8sb"] Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.411315 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.414235 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.421774 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64dd85876-2v8sb"] Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.473269 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-scripts\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.473343 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-tls-certs\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.473400 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-config-data\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.473453 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq8wm\" (UniqueName: \"kubernetes.io/projected/a6c54f55-cab8-41f9-8c6e-6f23442ed202-kube-api-access-dq8wm\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.473487 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-secret-key\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.473531 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c54f55-cab8-41f9-8c6e-6f23442ed202-logs\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.473550 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-combined-ca-bundle\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.475552 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-577c8b5cc5-h9cj6"] Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.507371 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5f58664c9d-xr6gw"] Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.509739 4856 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.539325 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f58664c9d-xr6gw"] Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575022 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-scripts\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575072 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-config-data\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575216 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fac214f-463f-4451-a06c-2e4750ff1eb3-logs\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575264 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-config-data\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575310 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq8wm\" (UniqueName: \"kubernetes.io/projected/a6c54f55-cab8-41f9-8c6e-6f23442ed202-kube-api-access-dq8wm\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575452 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-secret-key\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575552 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-secret-key\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575641 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c54f55-cab8-41f9-8c6e-6f23442ed202-logs\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575674 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-combined-ca-bundle\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575731 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-tls-certs\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575811 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qph66\" (UniqueName: \"kubernetes.io/projected/1fac214f-463f-4451-a06c-2e4750ff1eb3-kube-api-access-qph66\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575872 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-scripts\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.575982 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-tls-certs\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.576105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-combined-ca-bundle\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.576834 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-config-data\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.576872 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-scripts\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.576958 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c54f55-cab8-41f9-8c6e-6f23442ed202-logs\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.581760 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-combined-ca-bundle\") pod \"horizon-64dd85876-2v8sb\" (UID: 
\"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.582673 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-secret-key\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.585041 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-tls-certs\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.590661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq8wm\" (UniqueName: \"kubernetes.io/projected/a6c54f55-cab8-41f9-8c6e-6f23442ed202-kube-api-access-dq8wm\") pod \"horizon-64dd85876-2v8sb\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.678020 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fac214f-463f-4451-a06c-2e4750ff1eb3-logs\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.678464 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-config-data\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.678423 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fac214f-463f-4451-a06c-2e4750ff1eb3-logs\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.678589 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-secret-key\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.679008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-tls-certs\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.679045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qph66\" (UniqueName: \"kubernetes.io/projected/1fac214f-463f-4451-a06c-2e4750ff1eb3-kube-api-access-qph66\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.679434 4856 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-combined-ca-bundle\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.679506 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-scripts\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.679866 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-config-data\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.680164 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-scripts\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.682459 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-secret-key\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.682665 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-tls-certs\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.683036 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-combined-ca-bundle\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.696833 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qph66\" (UniqueName: \"kubernetes.io/projected/1fac214f-463f-4451-a06c-2e4750ff1eb3-kube-api-access-qph66\") pod \"horizon-5f58664c9d-xr6gw\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.739470 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:00 crc kubenswrapper[4856]: I1122 08:46:00.827262 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.223853 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64dd85876-2v8sb"] Nov 22 08:46:01 crc kubenswrapper[4856]: W1122 08:46:01.234529 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c54f55_cab8_41f9_8c6e_6f23442ed202.slice/crio-ab1e42d145a5caa21ed62215ae37545c89c46892aae9114ff31b6aed4110ffaa WatchSource:0}: Error finding container ab1e42d145a5caa21ed62215ae37545c89c46892aae9114ff31b6aed4110ffaa: Status 404 returned error can't find the container with id ab1e42d145a5caa21ed62215ae37545c89c46892aae9114ff31b6aed4110ffaa Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.312555 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5f58664c9d-xr6gw"] Nov 22 08:46:01 crc kubenswrapper[4856]: W1122 08:46:01.348077 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fac214f_463f_4451_a06c_2e4750ff1eb3.slice/crio-8e21a2d25639438f16c8f942b2046ca73bc44474b9df9c3b608f42328c1c03aa WatchSource:0}: Error finding container 8e21a2d25639438f16c8f942b2046ca73bc44474b9df9c3b608f42328c1c03aa: Status 404 returned error can't find the container with id 8e21a2d25639438f16c8f942b2046ca73bc44474b9df9c3b608f42328c1c03aa Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.801893 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.924437 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-public-tls-certs\") pod \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.924531 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr9fk\" (UniqueName: \"kubernetes.io/projected/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-kube-api-access-tr9fk\") pod \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.924569 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-combined-ca-bundle\") pod \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.924624 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-httpd-run\") pod \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.924645 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-scripts\") pod \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.924822 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-logs\") pod \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.924881 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-config-data\") pod \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\" (UID: \"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9\") " Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.945733 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-logs" (OuterVolumeSpecName: "logs") pod "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" (UID: "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.958201 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" (UID: "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.969012 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-scripts" (OuterVolumeSpecName: "scripts") pod "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" (UID: "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:01 crc kubenswrapper[4856]: I1122 08:46:01.976823 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-kube-api-access-tr9fk" (OuterVolumeSpecName: "kube-api-access-tr9fk") pod "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" (UID: "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9"). InnerVolumeSpecName "kube-api-access-tr9fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.015687 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" (UID: "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.017737 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-config-data" (OuterVolumeSpecName: "config-data") pod "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" (UID: "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.021069 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.027491 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.027542 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.027556 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.027570 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr9fk\" (UniqueName: \"kubernetes.io/projected/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-kube-api-access-tr9fk\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.027582 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.027591 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.029575 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" (UID: "475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.096946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64dd85876-2v8sb" event={"ID":"a6c54f55-cab8-41f9-8c6e-6f23442ed202","Type":"ContainerStarted","Data":"ab1e42d145a5caa21ed62215ae37545c89c46892aae9114ff31b6aed4110ffaa"} Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.104407 4856 generic.go:334] "Generic (PLEG): container finished" podID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerID="19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e" exitCode=0 Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.104528 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.104522 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9","Type":"ContainerDied","Data":"19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e"} Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.104673 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9","Type":"ContainerDied","Data":"a5806f31084f90b6a9ee7dc10b638b5efd6d62e0c6554b6d5f1b2807eeacfeaa"} Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.104716 4856 scope.go:117] "RemoveContainer" containerID="19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.118457 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f58664c9d-xr6gw" event={"ID":"1fac214f-463f-4451-a06c-2e4750ff1eb3","Type":"ContainerStarted","Data":"8e21a2d25639438f16c8f942b2046ca73bc44474b9df9c3b608f42328c1c03aa"} Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.121479 4856 generic.go:334] "Generic (PLEG): container finished" podID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerID="2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4" exitCode=0 Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.121546 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f971a400-35ad-42f4-a6a2-818bb7dc026d","Type":"ContainerDied","Data":"2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4"} Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.121575 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f971a400-35ad-42f4-a6a2-818bb7dc026d","Type":"ContainerDied","Data":"30ac2e82fb4aff5a0c00655282219de1054482599d2e60e966a6ffe77e133d5f"} Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.121626 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.128368 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-config-data\") pod \"f971a400-35ad-42f4-a6a2-818bb7dc026d\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.128464 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-combined-ca-bundle\") pod \"f971a400-35ad-42f4-a6a2-818bb7dc026d\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.128485 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-httpd-run\") pod \"f971a400-35ad-42f4-a6a2-818bb7dc026d\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.128532 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-scripts\") pod \"f971a400-35ad-42f4-a6a2-818bb7dc026d\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.128568 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxlc4\" (UniqueName: \"kubernetes.io/projected/f971a400-35ad-42f4-a6a2-818bb7dc026d-kube-api-access-fxlc4\") pod \"f971a400-35ad-42f4-a6a2-818bb7dc026d\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.128670 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-logs\") pod \"f971a400-35ad-42f4-a6a2-818bb7dc026d\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.128730 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-internal-tls-certs\") pod \"f971a400-35ad-42f4-a6a2-818bb7dc026d\" (UID: \"f971a400-35ad-42f4-a6a2-818bb7dc026d\") " Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.129711 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.129728 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f971a400-35ad-42f4-a6a2-818bb7dc026d" (UID: "f971a400-35ad-42f4-a6a2-818bb7dc026d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.132070 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-logs" (OuterVolumeSpecName: "logs") pod "f971a400-35ad-42f4-a6a2-818bb7dc026d" (UID: "f971a400-35ad-42f4-a6a2-818bb7dc026d"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.138827 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-scripts" (OuterVolumeSpecName: "scripts") pod "f971a400-35ad-42f4-a6a2-818bb7dc026d" (UID: "f971a400-35ad-42f4-a6a2-818bb7dc026d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.140852 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f971a400-35ad-42f4-a6a2-818bb7dc026d-kube-api-access-fxlc4" (OuterVolumeSpecName: "kube-api-access-fxlc4") pod "f971a400-35ad-42f4-a6a2-818bb7dc026d" (UID: "f971a400-35ad-42f4-a6a2-818bb7dc026d"). InnerVolumeSpecName "kube-api-access-fxlc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.164572 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.173888 4856 scope.go:117] "RemoveContainer" containerID="04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.188239 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.196538 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-config-data" (OuterVolumeSpecName: "config-data") pod "f971a400-35ad-42f4-a6a2-818bb7dc026d" (UID: "f971a400-35ad-42f4-a6a2-818bb7dc026d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.202832 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.203273 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-httpd" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203289 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-httpd" Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.203333 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-httpd" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203343 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-httpd" Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.203354 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-log" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203364 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-log" Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.203399 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-log" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203410 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-log" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203718 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-httpd" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203745 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" containerName="glance-log" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203760 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-log" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.203782 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" containerName="glance-httpd" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.205060 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.209131 4856 scope.go:117] "RemoveContainer" containerID="19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.210159 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.210370 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.210736 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e\": container with ID starting with 19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e not found: ID does not exist" containerID="19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.210786 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e"} err="failed to get container status \"19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e\": rpc error: code = NotFound desc = could not find container \"19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e\": container with ID starting with 19951d2e88630472ef3cf3d4e7f4f4526250c534966287a6ba3d30bd21a61f9e not found: ID does not exist" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.210812 4856 scope.go:117] "RemoveContainer" containerID="04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e" Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.211223 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e\": container with ID starting with 04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e not found: ID does not exist" containerID="04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.211245 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e"} err="failed to get container status \"04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e\": rpc error: code = NotFound desc = could not find container \"04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e\": container with ID starting with 04d58e3bab54109f892b11187ac913c49228ece8e0caa2609a2fdc4443ea589e not found: ID does not exist" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.211259 4856 scope.go:117] "RemoveContainer" containerID="2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.217630 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f971a400-35ad-42f4-a6a2-818bb7dc026d" (UID: "f971a400-35ad-42f4-a6a2-818bb7dc026d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.218686 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f971a400-35ad-42f4-a6a2-818bb7dc026d" (UID: "f971a400-35ad-42f4-a6a2-818bb7dc026d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.222805 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.231743 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.231780 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.231797 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.231810 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.231821 4856 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f971a400-35ad-42f4-a6a2-818bb7dc026d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.231832 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f971a400-35ad-42f4-a6a2-818bb7dc026d-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.231844 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxlc4\" (UniqueName: \"kubernetes.io/projected/f971a400-35ad-42f4-a6a2-818bb7dc026d-kube-api-access-fxlc4\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.244307 4856 scope.go:117] "RemoveContainer" containerID="b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.293761 4856 scope.go:117] "RemoveContainer" containerID="2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4" Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.294354 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4\": container with ID starting with 2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4 not found: ID does not exist" containerID="2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.294404 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4"} err="failed to get container status \"2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4\": rpc error: code = NotFound desc = could not find container \"2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4\": container with ID starting with 2acf4bda0bb6a1e742141c119adb7237f8747dc6ee949fa693b63637e58e42e4 not found: ID does not exist" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.294436 4856 scope.go:117] "RemoveContainer" containerID="b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59" Nov 22 08:46:02 crc kubenswrapper[4856]: E1122 08:46:02.295133 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59\": container with ID starting with b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59 not found: ID does not exist" containerID="b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.295155 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59"} err="failed to get container status \"b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59\": rpc error: code = NotFound desc = could not find container \"b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59\": container with ID starting with b72eedcea9c06f61ebd0b7cf16cb557e4af668471f9df424cbfc26e446141c59 not found: ID does not exist" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.333458 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dbf1025-16ed-4933-8207-61bb390843a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.333547 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.333610 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.334021 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbf1025-16ed-4933-8207-61bb390843a6-logs\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.334115 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gd6q\" (UniqueName: 
\"kubernetes.io/projected/0dbf1025-16ed-4933-8207-61bb390843a6-kube-api-access-9gd6q\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.334146 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.334176 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.436707 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dbf1025-16ed-4933-8207-61bb390843a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.436758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.436825 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.436932 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbf1025-16ed-4933-8207-61bb390843a6-logs\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.437096 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gd6q\" (UniqueName: \"kubernetes.io/projected/0dbf1025-16ed-4933-8207-61bb390843a6-kube-api-access-9gd6q\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.437136 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.437968 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.439486 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dbf1025-16ed-4933-8207-61bb390843a6-logs\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.439970 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0dbf1025-16ed-4933-8207-61bb390843a6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.443816 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.444154 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-scripts\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.446028 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.459854 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dbf1025-16ed-4933-8207-61bb390843a6-config-data\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.468291 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gd6q\" (UniqueName: \"kubernetes.io/projected/0dbf1025-16ed-4933-8207-61bb390843a6-kube-api-access-9gd6q\") pod \"glance-default-external-api-0\" (UID: \"0dbf1025-16ed-4933-8207-61bb390843a6\") " pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.483259 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.493734 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.499688 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.501709 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.504085 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.504394 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.537714 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.539556 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.539627 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-logs\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.539669 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.540214 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsp7z\" (UniqueName: \"kubernetes.io/projected/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-kube-api-access-gsp7z\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.540391 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.540482 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.540630 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.547330 4856 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.643417 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-logs\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.643481 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.643553 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsp7z\" (UniqueName: \"kubernetes.io/projected/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-kube-api-access-gsp7z\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.643606 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.643636 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.643680 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.643893 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.644101 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-logs\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.644204 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 
08:46:02.652564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.652831 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.653086 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.660947 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.666480 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsp7z\" (UniqueName: \"kubernetes.io/projected/b5cd3215-7fab-4cdf-acfe-b72f972a3d86-kube-api-access-gsp7z\") pod \"glance-default-internal-api-0\" (UID: \"b5cd3215-7fab-4cdf-acfe-b72f972a3d86\") " pod="openstack/glance-default-internal-api-0" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.738457 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9" path="/var/lib/kubelet/pods/475bb2f1-ffa4-4ea9-a7d2-63d6c996c9f9/volumes" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.739729 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f971a400-35ad-42f4-a6a2-818bb7dc026d" path="/var/lib/kubelet/pods/f971a400-35ad-42f4-a6a2-818bb7dc026d/volumes" Nov 22 08:46:02 crc kubenswrapper[4856]: I1122 08:46:02.843733 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:06 crc kubenswrapper[4856]: I1122 08:46:06.141829 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:46:06 crc kubenswrapper[4856]: E1122 08:46:06.145988 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:46:08 crc kubenswrapper[4856]: I1122 08:46:08.642063 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.205299 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd5f6bd69-67mxq" event={"ID":"0dd2f468-6caa-45c3-a28b-82f97f28162e","Type":"ContainerStarted","Data":"9396995ab31d7da1b961786cac953339bdb2a9d28b28b9eb2665c56bb0cc9072"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.205681 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd5f6bd69-67mxq" event={"ID":"0dd2f468-6caa-45c3-a28b-82f97f28162e","Type":"ContainerStarted","Data":"8474f07468743619639e01a82ef96755b8f42c4fe0035784d313727f63a1672d"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.205427 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bd5f6bd69-67mxq" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon" containerID="cri-o://9396995ab31d7da1b961786cac953339bdb2a9d28b28b9eb2665c56bb0cc9072" gracePeriod=30 Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.205384 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bd5f6bd69-67mxq" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon-log" containerID="cri-o://8474f07468743619639e01a82ef96755b8f42c4fe0035784d313727f63a1672d" gracePeriod=30 Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.210115 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f58664c9d-xr6gw" event={"ID":"1fac214f-463f-4451-a06c-2e4750ff1eb3","Type":"ContainerStarted","Data":"d675e288c9fc3739d3c69a4edbe815fd636a8fe5322484a8ad3c0e206a0a7afc"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.210225 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f58664c9d-xr6gw" event={"ID":"1fac214f-463f-4451-a06c-2e4750ff1eb3","Type":"ContainerStarted","Data":"d25884b425897151b8b46a058d2e89e36cf56c280458025a88acd48b68c8f0b2"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.213042 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0dbf1025-16ed-4933-8207-61bb390843a6","Type":"ContainerStarted","Data":"0150802e27eeccd5e43e8219efcbe3055b58420fe204d16c332b57750b3d9586"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.215981 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-577c8b5cc5-h9cj6" event={"ID":"9d00a074-d63d-48e2-99e7-54e9f5cebe8a","Type":"ContainerStarted","Data":"f209cd5d737edc6a54c7646a1334a86bc642ce128089438d9d7df8a7817cd624"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.216145 4856 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-577c8b5cc5-h9cj6" event={"ID":"9d00a074-d63d-48e2-99e7-54e9f5cebe8a","Type":"ContainerStarted","Data":"5bd9da5b204faec816c586f0c034616611da3c88d6a91692908f7377dc53e732"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.216228 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-577c8b5cc5-h9cj6" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon" containerID="cri-o://f209cd5d737edc6a54c7646a1334a86bc642ce128089438d9d7df8a7817cd624" gracePeriod=30 Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.216185 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-577c8b5cc5-h9cj6" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon-log" containerID="cri-o://5bd9da5b204faec816c586f0c034616611da3c88d6a91692908f7377dc53e732" gracePeriod=30 Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.217822 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64dd85876-2v8sb" event={"ID":"a6c54f55-cab8-41f9-8c6e-6f23442ed202","Type":"ContainerStarted","Data":"0f6e99dbd180519c525acd9e48c1823c31e9797096b08cf84a032cbe017c6f34"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.217871 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64dd85876-2v8sb" event={"ID":"a6c54f55-cab8-41f9-8c6e-6f23442ed202","Type":"ContainerStarted","Data":"fcb1ceab4c4c574fd97a89d9c44baf5e6267bdb78ee2636272cc18b565876281"} Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.236086 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bd5f6bd69-67mxq" podStartSLOduration=2.750128596 podStartE2EDuration="12.236069008s" podCreationTimestamp="2025-11-22 08:45:57 +0000 UTC" firstStartedPulling="2025-11-22 08:45:58.838665738 +0000 UTC m=+6201.252058986" lastFinishedPulling="2025-11-22 08:46:08.32460614 +0000 UTC m=+6210.737999398" observedRunningTime="2025-11-22 08:46:09.224709672 +0000 UTC m=+6211.638102940" watchObservedRunningTime="2025-11-22 08:46:09.236069008 +0000 UTC m=+6211.649462266" Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.263390 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-577c8b5cc5-h9cj6" podStartSLOduration=2.044585382 podStartE2EDuration="11.263369994s" podCreationTimestamp="2025-11-22 08:45:58 +0000 UTC" firstStartedPulling="2025-11-22 08:45:58.969157015 +0000 UTC m=+6201.382550273" lastFinishedPulling="2025-11-22 08:46:08.187941627 +0000 UTC m=+6210.601334885" observedRunningTime="2025-11-22 08:46:09.261094952 +0000 UTC m=+6211.674488220" watchObservedRunningTime="2025-11-22 08:46:09.263369994 +0000 UTC m=+6211.676763252" Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.264787 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5f58664c9d-xr6gw" podStartSLOduration=2.369891447 podStartE2EDuration="9.264776382s" podCreationTimestamp="2025-11-22 08:46:00 +0000 UTC" firstStartedPulling="2025-11-22 08:46:01.351447196 +0000 UTC m=+6203.764840454" lastFinishedPulling="2025-11-22 08:46:08.246332131 +0000 UTC m=+6210.659725389" observedRunningTime="2025-11-22 08:46:09.246572111 +0000 UTC m=+6211.659965379" watchObservedRunningTime="2025-11-22 08:46:09.264776382 +0000 UTC m=+6211.678169640" Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.284187 4856 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/horizon-64dd85876-2v8sb" podStartSLOduration=2.071599627 podStartE2EDuration="9.284168385s" podCreationTimestamp="2025-11-22 08:46:00 +0000 UTC" firstStartedPulling="2025-11-22 08:46:01.236734914 +0000 UTC m=+6203.650128172" lastFinishedPulling="2025-11-22 08:46:08.449303672 +0000 UTC m=+6210.862696930" observedRunningTime="2025-11-22 08:46:09.281429211 +0000 UTC m=+6211.694822489" watchObservedRunningTime="2025-11-22 08:46:09.284168385 +0000 UTC m=+6211.697561643" Nov 22 08:46:09 crc kubenswrapper[4856]: I1122 08:46:09.598798 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.229683 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0dbf1025-16ed-4933-8207-61bb390843a6","Type":"ContainerStarted","Data":"2ce025f161db215d7776831a5b5b42be02eddef49b0122537380ca01d074b34e"} Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.230889 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0dbf1025-16ed-4933-8207-61bb390843a6","Type":"ContainerStarted","Data":"6e919d16e1954e1780d9be9ec7552a93d36f756d1e838efd3a8a2761c3981692"} Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.234551 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b5cd3215-7fab-4cdf-acfe-b72f972a3d86","Type":"ContainerStarted","Data":"b32702b45cdc1b41a7a5f13d4bc632f235594da0bf979a0ae56aeacbfc880c5d"} Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.234740 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b5cd3215-7fab-4cdf-acfe-b72f972a3d86","Type":"ContainerStarted","Data":"045a9ca65eec576bbe4b2f93c5947a5fdb1402b97bf333ccfa23f12e331cd451"} Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.266990 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.266968564999999 podStartE2EDuration="8.266968565s" podCreationTimestamp="2025-11-22 08:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:46:10.26383419 +0000 UTC m=+6212.677227458" watchObservedRunningTime="2025-11-22 08:46:10.266968565 +0000 UTC m=+6212.680361823" Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.740019 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.740260 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.828256 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:10 crc kubenswrapper[4856]: I1122 08:46:10.828300 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:11 crc kubenswrapper[4856]: I1122 08:46:11.245144 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b5cd3215-7fab-4cdf-acfe-b72f972a3d86","Type":"ContainerStarted","Data":"95e4cc6514adec3456cfa78ec184c5e220ec58a918e592d8fb9c28da585af6c0"} Nov 22 08:46:11 crc 
kubenswrapper[4856]: I1122 08:46:11.272786 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.272762755 podStartE2EDuration="9.272762755s" podCreationTimestamp="2025-11-22 08:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:46:11.26480467 +0000 UTC m=+6213.678197928" watchObservedRunningTime="2025-11-22 08:46:11.272762755 +0000 UTC m=+6213.686156013" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.538818 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.539179 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.578879 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.592421 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.845028 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.845337 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.875653 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:12 crc kubenswrapper[4856]: I1122 08:46:12.884954 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:13 crc kubenswrapper[4856]: I1122 08:46:13.261313 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:13 crc kubenswrapper[4856]: I1122 08:46:13.261359 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 08:46:13 crc kubenswrapper[4856]: I1122 08:46:13.261374 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:13 crc kubenswrapper[4856]: I1122 08:46:13.261386 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 22 08:46:16 crc kubenswrapper[4856]: I1122 08:46:16.583863 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 08:46:16 crc kubenswrapper[4856]: I1122 08:46:16.723127 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 22 08:46:18 crc kubenswrapper[4856]: I1122 08:46:18.356628 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:46:18 crc kubenswrapper[4856]: I1122 08:46:18.462733 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:46:20 crc kubenswrapper[4856]: I1122 08:46:20.742386 4856 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/horizon-64dd85876-2v8sb" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8443: connect: connection refused" Nov 22 08:46:20 crc kubenswrapper[4856]: I1122 08:46:20.830780 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5f58664c9d-xr6gw" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.107:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8443: connect: connection refused" Nov 22 08:46:21 crc kubenswrapper[4856]: I1122 08:46:21.709984 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:46:21 crc kubenswrapper[4856]: E1122 08:46:21.710220 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:46:23 crc kubenswrapper[4856]: I1122 08:46:23.574156 4856 scope.go:117] "RemoveContainer" containerID="8fd568f1e0cf9006c5fb640eab74d555776670d3cf3ca53482af9648952a0057" Nov 22 08:46:23 crc kubenswrapper[4856]: I1122 08:46:23.610852 4856 scope.go:117] "RemoveContainer" containerID="30a303cf8123a1c2e7216674f3d314384fd0ff728dbc171820f782f5d449875f" Nov 22 08:46:30 crc kubenswrapper[4856]: I1122 08:46:30.037714 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mltbc"] Nov 22 08:46:30 crc kubenswrapper[4856]: I1122 08:46:30.045878 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-ef3a-account-create-rkn96"] Nov 22 08:46:30 crc kubenswrapper[4856]: I1122 08:46:30.057413 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-ef3a-account-create-rkn96"] Nov 22 08:46:30 crc kubenswrapper[4856]: I1122 08:46:30.065485 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mltbc"] Nov 22 08:46:30 crc kubenswrapper[4856]: I1122 08:46:30.729411 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9091157f-35e8-471a-a784-9dad836695ab" path="/var/lib/kubelet/pods/9091157f-35e8-471a-a784-9dad836695ab/volumes" Nov 22 08:46:30 crc kubenswrapper[4856]: I1122 08:46:30.730063 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5562803-a85b-481f-a8ed-a0c309e63253" path="/var/lib/kubelet/pods/f5562803-a85b-481f-a8ed-a0c309e63253/volumes" Nov 22 08:46:32 crc kubenswrapper[4856]: I1122 08:46:32.961979 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:33 crc kubenswrapper[4856]: I1122 08:46:33.008147 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:34 crc kubenswrapper[4856]: I1122 08:46:34.711779 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:46:34 crc kubenswrapper[4856]: E1122 08:46:34.712534 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:46:34 crc kubenswrapper[4856]: I1122 08:46:34.847828 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:46:35 crc kubenswrapper[4856]: I1122 08:46:35.000741 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:35 crc kubenswrapper[4856]: I1122 08:46:35.004352 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 22 08:46:35 crc kubenswrapper[4856]: I1122 08:46:35.079292 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:46:35 crc kubenswrapper[4856]: I1122 08:46:35.145004 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64dd85876-2v8sb"] Nov 22 08:46:35 crc kubenswrapper[4856]: I1122 08:46:35.489605 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64dd85876-2v8sb" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" containerID="cri-o://0f6e99dbd180519c525acd9e48c1823c31e9797096b08cf84a032cbe017c6f34" gracePeriod=30 Nov 22 08:46:35 crc kubenswrapper[4856]: I1122 08:46:35.489767 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64dd85876-2v8sb" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon-log" containerID="cri-o://fcb1ceab4c4c574fd97a89d9c44baf5e6267bdb78ee2636272cc18b565876281" gracePeriod=30 Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.566750 4856 generic.go:334] "Generic (PLEG): container finished" podID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerID="9396995ab31d7da1b961786cac953339bdb2a9d28b28b9eb2665c56bb0cc9072" exitCode=137 Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.567042 4856 generic.go:334] "Generic (PLEG): container finished" podID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerID="8474f07468743619639e01a82ef96755b8f42c4fe0035784d313727f63a1672d" exitCode=137 Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.567120 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd5f6bd69-67mxq" event={"ID":"0dd2f468-6caa-45c3-a28b-82f97f28162e","Type":"ContainerDied","Data":"9396995ab31d7da1b961786cac953339bdb2a9d28b28b9eb2665c56bb0cc9072"} Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.567146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd5f6bd69-67mxq" event={"ID":"0dd2f468-6caa-45c3-a28b-82f97f28162e","Type":"ContainerDied","Data":"8474f07468743619639e01a82ef96755b8f42c4fe0035784d313727f63a1672d"} Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.598385 4856 generic.go:334] "Generic (PLEG): container finished" podID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerID="f209cd5d737edc6a54c7646a1334a86bc642ce128089438d9d7df8a7817cd624" exitCode=137 Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.598660 4856 generic.go:334] "Generic (PLEG): container finished" podID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" 
containerID="5bd9da5b204faec816c586f0c034616611da3c88d6a91692908f7377dc53e732" exitCode=137 Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.598725 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-577c8b5cc5-h9cj6" event={"ID":"9d00a074-d63d-48e2-99e7-54e9f5cebe8a","Type":"ContainerDied","Data":"f209cd5d737edc6a54c7646a1334a86bc642ce128089438d9d7df8a7817cd624"} Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.598751 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-577c8b5cc5-h9cj6" event={"ID":"9d00a074-d63d-48e2-99e7-54e9f5cebe8a","Type":"ContainerDied","Data":"5bd9da5b204faec816c586f0c034616611da3c88d6a91692908f7377dc53e732"} Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.609826 4856 generic.go:334] "Generic (PLEG): container finished" podID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerID="0f6e99dbd180519c525acd9e48c1823c31e9797096b08cf84a032cbe017c6f34" exitCode=0 Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.609876 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64dd85876-2v8sb" event={"ID":"a6c54f55-cab8-41f9-8c6e-6f23442ed202","Type":"ContainerDied","Data":"0f6e99dbd180519c525acd9e48c1823c31e9797096b08cf84a032cbe017c6f34"} Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.784998 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.794167 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.942895 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-horizon-secret-key\") pod \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.942940 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-config-data\") pod \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.942973 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-scripts\") pod \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.943048 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0dd2f468-6caa-45c3-a28b-82f97f28162e-horizon-secret-key\") pod \"0dd2f468-6caa-45c3-a28b-82f97f28162e\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.943109 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hs6jk\" (UniqueName: \"kubernetes.io/projected/0dd2f468-6caa-45c3-a28b-82f97f28162e-kube-api-access-hs6jk\") pod \"0dd2f468-6caa-45c3-a28b-82f97f28162e\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.943168 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd2f468-6caa-45c3-a28b-82f97f28162e-logs\") pod \"0dd2f468-6caa-45c3-a28b-82f97f28162e\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.943201 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-logs\") pod \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.943302 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-config-data\") pod \"0dd2f468-6caa-45c3-a28b-82f97f28162e\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.943329 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tljrz\" (UniqueName: \"kubernetes.io/projected/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-kube-api-access-tljrz\") pod \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\" (UID: \"9d00a074-d63d-48e2-99e7-54e9f5cebe8a\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.943376 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-scripts\") pod \"0dd2f468-6caa-45c3-a28b-82f97f28162e\" (UID: \"0dd2f468-6caa-45c3-a28b-82f97f28162e\") " Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.944123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-logs" (OuterVolumeSpecName: "logs") pod "9d00a074-d63d-48e2-99e7-54e9f5cebe8a" (UID: "9d00a074-d63d-48e2-99e7-54e9f5cebe8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.944864 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dd2f468-6caa-45c3-a28b-82f97f28162e-logs" (OuterVolumeSpecName: "logs") pod "0dd2f468-6caa-45c3-a28b-82f97f28162e" (UID: "0dd2f468-6caa-45c3-a28b-82f97f28162e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.949703 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9d00a074-d63d-48e2-99e7-54e9f5cebe8a" (UID: "9d00a074-d63d-48e2-99e7-54e9f5cebe8a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.950748 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-kube-api-access-tljrz" (OuterVolumeSpecName: "kube-api-access-tljrz") pod "9d00a074-d63d-48e2-99e7-54e9f5cebe8a" (UID: "9d00a074-d63d-48e2-99e7-54e9f5cebe8a"). InnerVolumeSpecName "kube-api-access-tljrz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.950881 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd2f468-6caa-45c3-a28b-82f97f28162e-kube-api-access-hs6jk" (OuterVolumeSpecName: "kube-api-access-hs6jk") pod "0dd2f468-6caa-45c3-a28b-82f97f28162e" (UID: "0dd2f468-6caa-45c3-a28b-82f97f28162e"). InnerVolumeSpecName "kube-api-access-hs6jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.952761 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd2f468-6caa-45c3-a28b-82f97f28162e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0dd2f468-6caa-45c3-a28b-82f97f28162e" (UID: "0dd2f468-6caa-45c3-a28b-82f97f28162e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.970568 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-scripts" (OuterVolumeSpecName: "scripts") pod "9d00a074-d63d-48e2-99e7-54e9f5cebe8a" (UID: "9d00a074-d63d-48e2-99e7-54e9f5cebe8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.970615 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-config-data" (OuterVolumeSpecName: "config-data") pod "0dd2f468-6caa-45c3-a28b-82f97f28162e" (UID: "0dd2f468-6caa-45c3-a28b-82f97f28162e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.972075 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-scripts" (OuterVolumeSpecName: "scripts") pod "0dd2f468-6caa-45c3-a28b-82f97f28162e" (UID: "0dd2f468-6caa-45c3-a28b-82f97f28162e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:46:39 crc kubenswrapper[4856]: I1122 08:46:39.972188 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-config-data" (OuterVolumeSpecName: "config-data") pod "9d00a074-d63d-48e2-99e7-54e9f5cebe8a" (UID: "9d00a074-d63d-48e2-99e7-54e9f5cebe8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.039585 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-f865q"] Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045635 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tljrz\" (UniqueName: \"kubernetes.io/projected/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-kube-api-access-tljrz\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045671 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045682 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045693 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045703 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045712 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0dd2f468-6caa-45c3-a28b-82f97f28162e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045722 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hs6jk\" (UniqueName: \"kubernetes.io/projected/0dd2f468-6caa-45c3-a28b-82f97f28162e-kube-api-access-hs6jk\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045731 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dd2f468-6caa-45c3-a28b-82f97f28162e-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045738 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d00a074-d63d-48e2-99e7-54e9f5cebe8a-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.045746 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0dd2f468-6caa-45c3-a28b-82f97f28162e-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.048709 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-f865q"] Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.622786 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-577c8b5cc5-h9cj6" event={"ID":"9d00a074-d63d-48e2-99e7-54e9f5cebe8a","Type":"ContainerDied","Data":"6970d0ef86ca86697678323d06db1e47e29aeaa10f52fc10881ab174ba964b87"} Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.623387 4856 scope.go:117] "RemoveContainer" containerID="f209cd5d737edc6a54c7646a1334a86bc642ce128089438d9d7df8a7817cd624" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 
08:46:40.623640 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-577c8b5cc5-h9cj6" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.627088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bd5f6bd69-67mxq" event={"ID":"0dd2f468-6caa-45c3-a28b-82f97f28162e","Type":"ContainerDied","Data":"9055d250252ff0a20329170232fc774af978126bb5890903a84dd58699c3caf3"} Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.627133 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bd5f6bd69-67mxq" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.676726 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-577c8b5cc5-h9cj6"] Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.687536 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-577c8b5cc5-h9cj6"] Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.695833 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bd5f6bd69-67mxq"] Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.704996 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bd5f6bd69-67mxq"] Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.740613 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64dd85876-2v8sb" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8443: connect: connection refused" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.753193 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" path="/var/lib/kubelet/pods/0dd2f468-6caa-45c3-a28b-82f97f28162e/volumes" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.754665 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" path="/var/lib/kubelet/pods/9d00a074-d63d-48e2-99e7-54e9f5cebe8a/volumes" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.756129 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b945e4ce-9238-4801-a040-0fc22d868de7" path="/var/lib/kubelet/pods/b945e4ce-9238-4801-a040-0fc22d868de7/volumes" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.812354 4856 scope.go:117] "RemoveContainer" containerID="5bd9da5b204faec816c586f0c034616611da3c88d6a91692908f7377dc53e732" Nov 22 08:46:40 crc kubenswrapper[4856]: I1122 08:46:40.850322 4856 scope.go:117] "RemoveContainer" containerID="9396995ab31d7da1b961786cac953339bdb2a9d28b28b9eb2665c56bb0cc9072" Nov 22 08:46:41 crc kubenswrapper[4856]: I1122 08:46:41.019831 4856 scope.go:117] "RemoveContainer" containerID="8474f07468743619639e01a82ef96755b8f42c4fe0035784d313727f63a1672d" Nov 22 08:46:49 crc kubenswrapper[4856]: I1122 08:46:49.710284 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:46:49 crc kubenswrapper[4856]: E1122 08:46:49.711126 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:46:50 crc kubenswrapper[4856]: I1122 08:46:50.741089 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64dd85876-2v8sb" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8443: connect: connection refused" Nov 22 08:47:00 crc kubenswrapper[4856]: I1122 08:47:00.741299 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-64dd85876-2v8sb" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.106:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.106:8443: connect: connection refused" Nov 22 08:47:00 crc kubenswrapper[4856]: I1122 08:47:00.741968 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:47:04 crc kubenswrapper[4856]: I1122 08:47:04.710085 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:47:04 crc kubenswrapper[4856]: E1122 08:47:04.710663 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:47:05 crc kubenswrapper[4856]: I1122 08:47:05.873110 4856 generic.go:334] "Generic (PLEG): container finished" podID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerID="fcb1ceab4c4c574fd97a89d9c44baf5e6267bdb78ee2636272cc18b565876281" exitCode=137 Nov 22 08:47:05 crc kubenswrapper[4856]: I1122 08:47:05.873182 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64dd85876-2v8sb" event={"ID":"a6c54f55-cab8-41f9-8c6e-6f23442ed202","Type":"ContainerDied","Data":"fcb1ceab4c4c574fd97a89d9c44baf5e6267bdb78ee2636272cc18b565876281"} Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.012333 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.039757 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-config-data\") pod \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.039852 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-secret-key\") pod \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.039942 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-tls-certs\") pod \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.040019 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c54f55-cab8-41f9-8c6e-6f23442ed202-logs\") pod \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.040076 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-combined-ca-bundle\") pod \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.040102 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-scripts\") pod \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.040306 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq8wm\" (UniqueName: \"kubernetes.io/projected/a6c54f55-cab8-41f9-8c6e-6f23442ed202-kube-api-access-dq8wm\") pod \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\" (UID: \"a6c54f55-cab8-41f9-8c6e-6f23442ed202\") " Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.041095 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c54f55-cab8-41f9-8c6e-6f23442ed202-logs" (OuterVolumeSpecName: "logs") pod "a6c54f55-cab8-41f9-8c6e-6f23442ed202" (UID: "a6c54f55-cab8-41f9-8c6e-6f23442ed202"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.046361 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c54f55-cab8-41f9-8c6e-6f23442ed202-kube-api-access-dq8wm" (OuterVolumeSpecName: "kube-api-access-dq8wm") pod "a6c54f55-cab8-41f9-8c6e-6f23442ed202" (UID: "a6c54f55-cab8-41f9-8c6e-6f23442ed202"). InnerVolumeSpecName "kube-api-access-dq8wm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.047692 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a6c54f55-cab8-41f9-8c6e-6f23442ed202" (UID: "a6c54f55-cab8-41f9-8c6e-6f23442ed202"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.068361 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-scripts" (OuterVolumeSpecName: "scripts") pod "a6c54f55-cab8-41f9-8c6e-6f23442ed202" (UID: "a6c54f55-cab8-41f9-8c6e-6f23442ed202"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.070262 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-config-data" (OuterVolumeSpecName: "config-data") pod "a6c54f55-cab8-41f9-8c6e-6f23442ed202" (UID: "a6c54f55-cab8-41f9-8c6e-6f23442ed202"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.075054 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6c54f55-cab8-41f9-8c6e-6f23442ed202" (UID: "a6c54f55-cab8-41f9-8c6e-6f23442ed202"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.098037 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "a6c54f55-cab8-41f9-8c6e-6f23442ed202" (UID: "a6c54f55-cab8-41f9-8c6e-6f23442ed202"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.143032 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.143072 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.143082 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6c54f55-cab8-41f9-8c6e-6f23442ed202-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.143093 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6c54f55-cab8-41f9-8c6e-6f23442ed202-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.143103 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.143148 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq8wm\" (UniqueName: \"kubernetes.io/projected/a6c54f55-cab8-41f9-8c6e-6f23442ed202-kube-api-access-dq8wm\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.143160 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a6c54f55-cab8-41f9-8c6e-6f23442ed202-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.884146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64dd85876-2v8sb" event={"ID":"a6c54f55-cab8-41f9-8c6e-6f23442ed202","Type":"ContainerDied","Data":"ab1e42d145a5caa21ed62215ae37545c89c46892aae9114ff31b6aed4110ffaa"} Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.884197 4856 scope.go:117] "RemoveContainer" containerID="0f6e99dbd180519c525acd9e48c1823c31e9797096b08cf84a032cbe017c6f34" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.884238 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64dd85876-2v8sb" Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.912690 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-64dd85876-2v8sb"] Nov 22 08:47:06 crc kubenswrapper[4856]: I1122 08:47:06.920352 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-64dd85876-2v8sb"] Nov 22 08:47:07 crc kubenswrapper[4856]: I1122 08:47:07.046803 4856 scope.go:117] "RemoveContainer" containerID="fcb1ceab4c4c574fd97a89d9c44baf5e6267bdb78ee2636272cc18b565876281" Nov 22 08:47:08 crc kubenswrapper[4856]: I1122 08:47:08.720709 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" path="/var/lib/kubelet/pods/a6c54f55-cab8-41f9-8c6e-6f23442ed202/volumes" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.052376 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8484479b76-8csj5"] Nov 22 08:47:16 crc kubenswrapper[4856]: E1122 08:47:16.055828 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.055864 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: E1122 08:47:16.055894 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.055900 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: E1122 08:47:16.055931 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.055940 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: E1122 08:47:16.055957 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.055965 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: E1122 08:47:16.055988 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.055997 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: E1122 08:47:16.056025 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.056032 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.074639 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.074769 4856 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.074788 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d00a074-d63d-48e2-99e7-54e9f5cebe8a" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.074820 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dd2f468-6caa-45c3-a28b-82f97f28162e" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.074883 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon-log" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.074901 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c54f55-cab8-41f9-8c6e-6f23442ed202" containerName="horizon" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.092549 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.102892 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8484479b76-8csj5"] Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.132480 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14290ea7-6928-401a-8a9e-3ab8e557570d-logs\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.132593 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm278\" (UniqueName: \"kubernetes.io/projected/14290ea7-6928-401a-8a9e-3ab8e557570d-kube-api-access-dm278\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.132628 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-combined-ca-bundle\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.132675 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-horizon-secret-key\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.132701 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-horizon-tls-certs\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.132734 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14290ea7-6928-401a-8a9e-3ab8e557570d-scripts\") pod 
\"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.132779 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14290ea7-6928-401a-8a9e-3ab8e557570d-config-data\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.234166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14290ea7-6928-401a-8a9e-3ab8e557570d-scripts\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.234260 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14290ea7-6928-401a-8a9e-3ab8e557570d-config-data\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.234292 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14290ea7-6928-401a-8a9e-3ab8e557570d-logs\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.234353 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dm278\" (UniqueName: \"kubernetes.io/projected/14290ea7-6928-401a-8a9e-3ab8e557570d-kube-api-access-dm278\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.234389 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-combined-ca-bundle\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.234430 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-horizon-secret-key\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.234457 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-horizon-tls-certs\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.235163 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14290ea7-6928-401a-8a9e-3ab8e557570d-scripts\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc 
kubenswrapper[4856]: I1122 08:47:16.235732 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14290ea7-6928-401a-8a9e-3ab8e557570d-logs\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.236965 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/14290ea7-6928-401a-8a9e-3ab8e557570d-config-data\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.241921 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-horizon-tls-certs\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.242688 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-horizon-secret-key\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.245229 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14290ea7-6928-401a-8a9e-3ab8e557570d-combined-ca-bundle\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.256177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm278\" (UniqueName: \"kubernetes.io/projected/14290ea7-6928-401a-8a9e-3ab8e557570d-kube-api-access-dm278\") pod \"horizon-8484479b76-8csj5\" (UID: \"14290ea7-6928-401a-8a9e-3ab8e557570d\") " pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:16 crc kubenswrapper[4856]: I1122 08:47:16.429722 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.333366 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-s8hnn"] Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.335414 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.340895 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-s8hnn"] Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.435618 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-bd20-account-create-ncmhw"] Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.436995 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.439564 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.443545 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-bd20-account-create-ncmhw"] Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.455294 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/211ac788-84c0-47d0-a08f-574892036281-operator-scripts\") pod \"heat-db-create-s8hnn\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.455419 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffp98\" (UniqueName: \"kubernetes.io/projected/211ac788-84c0-47d0-a08f-574892036281-kube-api-access-ffp98\") pod \"heat-db-create-s8hnn\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.556998 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/211ac788-84c0-47d0-a08f-574892036281-operator-scripts\") pod \"heat-db-create-s8hnn\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.557127 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pntkt\" (UniqueName: \"kubernetes.io/projected/83b13002-f7ec-482b-81b3-dc6297a4ebc9-kube-api-access-pntkt\") pod \"heat-bd20-account-create-ncmhw\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.557166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffp98\" (UniqueName: \"kubernetes.io/projected/211ac788-84c0-47d0-a08f-574892036281-kube-api-access-ffp98\") pod \"heat-db-create-s8hnn\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.557194 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83b13002-f7ec-482b-81b3-dc6297a4ebc9-operator-scripts\") pod \"heat-bd20-account-create-ncmhw\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.558115 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/211ac788-84c0-47d0-a08f-574892036281-operator-scripts\") pod \"heat-db-create-s8hnn\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.578034 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffp98\" (UniqueName: \"kubernetes.io/projected/211ac788-84c0-47d0-a08f-574892036281-kube-api-access-ffp98\") pod \"heat-db-create-s8hnn\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " pod="openstack/heat-db-create-s8hnn" 
Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.658778 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.659614 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pntkt\" (UniqueName: \"kubernetes.io/projected/83b13002-f7ec-482b-81b3-dc6297a4ebc9-kube-api-access-pntkt\") pod \"heat-bd20-account-create-ncmhw\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.659664 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83b13002-f7ec-482b-81b3-dc6297a4ebc9-operator-scripts\") pod \"heat-bd20-account-create-ncmhw\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.660323 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83b13002-f7ec-482b-81b3-dc6297a4ebc9-operator-scripts\") pod \"heat-bd20-account-create-ncmhw\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.682386 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pntkt\" (UniqueName: \"kubernetes.io/projected/83b13002-f7ec-482b-81b3-dc6297a4ebc9-kube-api-access-pntkt\") pod \"heat-bd20-account-create-ncmhw\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.756966 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:17 crc kubenswrapper[4856]: I1122 08:47:17.906745 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8484479b76-8csj5"] Nov 22 08:47:18 crc kubenswrapper[4856]: I1122 08:47:18.022960 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8484479b76-8csj5" event={"ID":"14290ea7-6928-401a-8a9e-3ab8e557570d","Type":"ContainerStarted","Data":"5beb2e545c3175438de8dc1e09862d2fd0acbf3ccce9c3b6e512658189c088d0"} Nov 22 08:47:18 crc kubenswrapper[4856]: I1122 08:47:18.150845 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-bd20-account-create-ncmhw"] Nov 22 08:47:18 crc kubenswrapper[4856]: I1122 08:47:18.173721 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-s8hnn"] Nov 22 08:47:18 crc kubenswrapper[4856]: W1122 08:47:18.185881 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod211ac788_84c0_47d0_a08f_574892036281.slice/crio-c2b1dc3b678bfa3cb4a370f6f1726673a194e4da3f2572238a439c8bca940b49 WatchSource:0}: Error finding container c2b1dc3b678bfa3cb4a370f6f1726673a194e4da3f2572238a439c8bca940b49: Status 404 returned error can't find the container with id c2b1dc3b678bfa3cb4a370f6f1726673a194e4da3f2572238a439c8bca940b49 Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.036374 4856 generic.go:334] "Generic (PLEG): container finished" podID="83b13002-f7ec-482b-81b3-dc6297a4ebc9" containerID="4b00f624b93df7f63e47150f91f0621140bb717d4499c2d780aea46428ac582b" exitCode=0 Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.036461 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-bd20-account-create-ncmhw" event={"ID":"83b13002-f7ec-482b-81b3-dc6297a4ebc9","Type":"ContainerDied","Data":"4b00f624b93df7f63e47150f91f0621140bb717d4499c2d780aea46428ac582b"} Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.036806 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-bd20-account-create-ncmhw" event={"ID":"83b13002-f7ec-482b-81b3-dc6297a4ebc9","Type":"ContainerStarted","Data":"a4920d58842528f0e9cb091e138cb37b0780f21061ea7e61d05575a47a7d5966"} Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.040190 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8484479b76-8csj5" event={"ID":"14290ea7-6928-401a-8a9e-3ab8e557570d","Type":"ContainerStarted","Data":"e5ca5e1316f957cf9785485afc3605ce71c4e1791be8f81c279028fb3217ae72"} Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.040255 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8484479b76-8csj5" event={"ID":"14290ea7-6928-401a-8a9e-3ab8e557570d","Type":"ContainerStarted","Data":"7a6325538830c5bdb2f889d0a44581db98ec6ad4cd0bd20d6f6ca0982a7fedd7"} Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.048417 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s8hnn" event={"ID":"211ac788-84c0-47d0-a08f-574892036281","Type":"ContainerStarted","Data":"a765d7e8805ea40f32ceccbd95c3df40bdcf9b38525f446649a4091937e0e994"} Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.048467 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s8hnn" event={"ID":"211ac788-84c0-47d0-a08f-574892036281","Type":"ContainerStarted","Data":"c2b1dc3b678bfa3cb4a370f6f1726673a194e4da3f2572238a439c8bca940b49"} Nov 22 08:47:19 crc 
kubenswrapper[4856]: I1122 08:47:19.089452 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8484479b76-8csj5" podStartSLOduration=3.089429303 podStartE2EDuration="3.089429303s" podCreationTimestamp="2025-11-22 08:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:47:19.072697042 +0000 UTC m=+6281.486090300" watchObservedRunningTime="2025-11-22 08:47:19.089429303 +0000 UTC m=+6281.502822561" Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.100068 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-s8hnn" podStartSLOduration=2.100048518 podStartE2EDuration="2.100048518s" podCreationTimestamp="2025-11-22 08:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:47:19.087705275 +0000 UTC m=+6281.501098523" watchObservedRunningTime="2025-11-22 08:47:19.100048518 +0000 UTC m=+6281.513441776" Nov 22 08:47:19 crc kubenswrapper[4856]: I1122 08:47:19.710054 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:47:19 crc kubenswrapper[4856]: E1122 08:47:19.710332 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.061116 4856 generic.go:334] "Generic (PLEG): container finished" podID="211ac788-84c0-47d0-a08f-574892036281" containerID="a765d7e8805ea40f32ceccbd95c3df40bdcf9b38525f446649a4091937e0e994" exitCode=0 Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.061178 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s8hnn" event={"ID":"211ac788-84c0-47d0-a08f-574892036281","Type":"ContainerDied","Data":"a765d7e8805ea40f32ceccbd95c3df40bdcf9b38525f446649a4091937e0e994"} Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.423727 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.523986 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83b13002-f7ec-482b-81b3-dc6297a4ebc9-operator-scripts\") pod \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.524044 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pntkt\" (UniqueName: \"kubernetes.io/projected/83b13002-f7ec-482b-81b3-dc6297a4ebc9-kube-api-access-pntkt\") pod \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\" (UID: \"83b13002-f7ec-482b-81b3-dc6297a4ebc9\") " Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.525429 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83b13002-f7ec-482b-81b3-dc6297a4ebc9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "83b13002-f7ec-482b-81b3-dc6297a4ebc9" (UID: "83b13002-f7ec-482b-81b3-dc6297a4ebc9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.530852 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b13002-f7ec-482b-81b3-dc6297a4ebc9-kube-api-access-pntkt" (OuterVolumeSpecName: "kube-api-access-pntkt") pod "83b13002-f7ec-482b-81b3-dc6297a4ebc9" (UID: "83b13002-f7ec-482b-81b3-dc6297a4ebc9"). InnerVolumeSpecName "kube-api-access-pntkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.626419 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/83b13002-f7ec-482b-81b3-dc6297a4ebc9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:20 crc kubenswrapper[4856]: I1122 08:47:20.626894 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pntkt\" (UniqueName: \"kubernetes.io/projected/83b13002-f7ec-482b-81b3-dc6297a4ebc9-kube-api-access-pntkt\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.073095 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-bd20-account-create-ncmhw" event={"ID":"83b13002-f7ec-482b-81b3-dc6297a4ebc9","Type":"ContainerDied","Data":"a4920d58842528f0e9cb091e138cb37b0780f21061ea7e61d05575a47a7d5966"} Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.073145 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4920d58842528f0e9cb091e138cb37b0780f21061ea7e61d05575a47a7d5966" Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.073160 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-bd20-account-create-ncmhw" Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.416660 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.445161 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/211ac788-84c0-47d0-a08f-574892036281-operator-scripts\") pod \"211ac788-84c0-47d0-a08f-574892036281\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.445392 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffp98\" (UniqueName: \"kubernetes.io/projected/211ac788-84c0-47d0-a08f-574892036281-kube-api-access-ffp98\") pod \"211ac788-84c0-47d0-a08f-574892036281\" (UID: \"211ac788-84c0-47d0-a08f-574892036281\") " Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.449716 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/211ac788-84c0-47d0-a08f-574892036281-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "211ac788-84c0-47d0-a08f-574892036281" (UID: "211ac788-84c0-47d0-a08f-574892036281"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.455547 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/211ac788-84c0-47d0-a08f-574892036281-kube-api-access-ffp98" (OuterVolumeSpecName: "kube-api-access-ffp98") pod "211ac788-84c0-47d0-a08f-574892036281" (UID: "211ac788-84c0-47d0-a08f-574892036281"). InnerVolumeSpecName "kube-api-access-ffp98". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.548720 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffp98\" (UniqueName: \"kubernetes.io/projected/211ac788-84c0-47d0-a08f-574892036281-kube-api-access-ffp98\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:21 crc kubenswrapper[4856]: I1122 08:47:21.548755 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/211ac788-84c0-47d0-a08f-574892036281-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.049972 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-8qjfd"] Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.071409 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-acc9-account-create-tbhhz"] Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.084991 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-8qjfd"] Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.091310 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-s8hnn" event={"ID":"211ac788-84c0-47d0-a08f-574892036281","Type":"ContainerDied","Data":"c2b1dc3b678bfa3cb4a370f6f1726673a194e4da3f2572238a439c8bca940b49"} Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.091358 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b1dc3b678bfa3cb4a370f6f1726673a194e4da3f2572238a439c8bca940b49" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.091369 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-s8hnn" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.094433 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-acc9-account-create-tbhhz"] Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.584713 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-4prz8"] Nov 22 08:47:22 crc kubenswrapper[4856]: E1122 08:47:22.585253 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="211ac788-84c0-47d0-a08f-574892036281" containerName="mariadb-database-create" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.585277 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="211ac788-84c0-47d0-a08f-574892036281" containerName="mariadb-database-create" Nov 22 08:47:22 crc kubenswrapper[4856]: E1122 08:47:22.585297 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b13002-f7ec-482b-81b3-dc6297a4ebc9" containerName="mariadb-account-create" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.585309 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b13002-f7ec-482b-81b3-dc6297a4ebc9" containerName="mariadb-account-create" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.585567 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b13002-f7ec-482b-81b3-dc6297a4ebc9" containerName="mariadb-account-create" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.585615 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="211ac788-84c0-47d0-a08f-574892036281" containerName="mariadb-database-create" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.586538 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.588740 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-zldpj" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.588946 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.596256 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-4prz8"] Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.668834 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-combined-ca-bundle\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.668938 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7gn\" (UniqueName: \"kubernetes.io/projected/b079f0f9-7b51-4800-b13f-f8d23132560f-kube-api-access-js7gn\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.669239 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-config-data\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.723653 4856 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="17d00c65-c366-406b-b9be-1d9c80574db0" path="/var/lib/kubelet/pods/17d00c65-c366-406b-b9be-1d9c80574db0/volumes" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.724362 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bd45e1-dd34-4aaa-b0f7-e939fdae1d40" path="/var/lib/kubelet/pods/39bd45e1-dd34-4aaa-b0f7-e939fdae1d40/volumes" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.771274 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-combined-ca-bundle\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.771740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js7gn\" (UniqueName: \"kubernetes.io/projected/b079f0f9-7b51-4800-b13f-f8d23132560f-kube-api-access-js7gn\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.771795 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-config-data\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.790722 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-combined-ca-bundle\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.791048 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-config-data\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.791455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js7gn\" (UniqueName: \"kubernetes.io/projected/b079f0f9-7b51-4800-b13f-f8d23132560f-kube-api-access-js7gn\") pod \"heat-db-sync-4prz8\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:22 crc kubenswrapper[4856]: I1122 08:47:22.903875 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-4prz8" Nov 22 08:47:23 crc kubenswrapper[4856]: I1122 08:47:23.418135 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-4prz8"] Nov 22 08:47:23 crc kubenswrapper[4856]: I1122 08:47:23.710371 4856 scope.go:117] "RemoveContainer" containerID="3b81f97ea589c8b96081f551c6306e09a909dbee2801dc337e1f6dbd5e143d52" Nov 22 08:47:23 crc kubenswrapper[4856]: I1122 08:47:23.733069 4856 scope.go:117] "RemoveContainer" containerID="1b5a366c26955a35d4496d1eb77522d7e86641bcf36a9510f81e1ce989dc803a" Nov 22 08:47:23 crc kubenswrapper[4856]: I1122 08:47:23.777563 4856 scope.go:117] "RemoveContainer" containerID="d9717c199e03ff9daea709aa0670bf7893dd675ccf9ef270d7ed1c515135bff0" Nov 22 08:47:23 crc kubenswrapper[4856]: I1122 08:47:23.834103 4856 scope.go:117] "RemoveContainer" containerID="d7644380a62eff558c0fa35ec29ae901b1559292572f2737f5aaba538794a13c" Nov 22 08:47:23 crc kubenswrapper[4856]: I1122 08:47:23.905189 4856 scope.go:117] "RemoveContainer" containerID="39ef4ef30c09002a4bb23bf2e7c579cc2c554c3d7baea7033164eedda904462d" Nov 22 08:47:24 crc kubenswrapper[4856]: I1122 08:47:24.112641 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4prz8" event={"ID":"b079f0f9-7b51-4800-b13f-f8d23132560f","Type":"ContainerStarted","Data":"2f20b6b062c258c83e0ee3154e851ff6b512d4e0bee1755752e8caf29c9b7385"} Nov 22 08:47:26 crc kubenswrapper[4856]: I1122 08:47:26.429824 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:26 crc kubenswrapper[4856]: I1122 08:47:26.430189 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:33 crc kubenswrapper[4856]: I1122 08:47:33.710155 4856 scope.go:117] "RemoveContainer" containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:47:36 crc kubenswrapper[4856]: I1122 08:47:36.432117 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8484479b76-8csj5" podUID="14290ea7-6928-401a-8a9e-3ab8e557570d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.110:8443: connect: connection refused" Nov 22 08:47:42 crc kubenswrapper[4856]: I1122 08:47:42.041601 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-qxjbg"] Nov 22 08:47:42 crc kubenswrapper[4856]: I1122 08:47:42.050284 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-qxjbg"] Nov 22 08:47:42 crc kubenswrapper[4856]: E1122 08:47:42.582985 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761" Nov 22 08:47:42 crc kubenswrapper[4856]: E1122 08:47:42.583056 4856 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761" Nov 22 08:47:42 crc kubenswrapper[4856]: E1122 08:47:42.583215 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761,Command:[/bin/bash],Args:[-c 
/usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-js7gn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-4prz8_openstack(b079f0f9-7b51-4800-b13f-f8d23132560f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 08:47:42 crc kubenswrapper[4856]: E1122 08:47:42.584768 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-4prz8" podUID="b079f0f9-7b51-4800-b13f-f8d23132560f" Nov 22 08:47:42 crc kubenswrapper[4856]: I1122 08:47:42.723010 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0b62851-be03-45b3-8433-3d78718bc4c7" path="/var/lib/kubelet/pods/c0b62851-be03-45b3-8433-3d78718bc4c7/volumes" Nov 22 08:47:43 crc kubenswrapper[4856]: I1122 08:47:43.301539 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"2e4db6dfa0f8e0b89e30204c184a440910ad4ebbbe2c1f37db91bf8c459e660c"} Nov 22 08:47:43 crc kubenswrapper[4856]: E1122 08:47:43.303179 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-heat-engine:87d86758a49b8425a546c66207f21761\\\"\"" pod="openstack/heat-db-sync-4prz8" podUID="b079f0f9-7b51-4800-b13f-f8d23132560f" Nov 22 08:47:48 crc kubenswrapper[4856]: I1122 08:47:48.278789 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:50 crc kubenswrapper[4856]: I1122 08:47:50.002539 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8484479b76-8csj5" Nov 22 08:47:50 crc kubenswrapper[4856]: I1122 08:47:50.079966 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f58664c9d-xr6gw"] Nov 22 08:47:50 crc kubenswrapper[4856]: I1122 08:47:50.080252 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f58664c9d-xr6gw" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon-log" containerID="cri-o://d25884b425897151b8b46a058d2e89e36cf56c280458025a88acd48b68c8f0b2" gracePeriod=30 Nov 22 08:47:50 crc kubenswrapper[4856]: I1122 08:47:50.080403 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5f58664c9d-xr6gw" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" containerID="cri-o://d675e288c9fc3739d3c69a4edbe815fd636a8fe5322484a8ad3c0e206a0a7afc" gracePeriod=30 Nov 22 08:47:53 crc kubenswrapper[4856]: I1122 08:47:53.316250 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f58664c9d-xr6gw" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.107:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:59068->10.217.1.107:8443: read: connection reset by peer" Nov 22 08:47:54 crc kubenswrapper[4856]: I1122 08:47:54.402884 4856 generic.go:334] "Generic (PLEG): container finished" podID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerID="d675e288c9fc3739d3c69a4edbe815fd636a8fe5322484a8ad3c0e206a0a7afc" exitCode=0 Nov 22 08:47:54 crc kubenswrapper[4856]: I1122 08:47:54.402963 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f58664c9d-xr6gw" event={"ID":"1fac214f-463f-4451-a06c-2e4750ff1eb3","Type":"ContainerDied","Data":"d675e288c9fc3739d3c69a4edbe815fd636a8fe5322484a8ad3c0e206a0a7afc"} Nov 22 08:47:57 crc kubenswrapper[4856]: I1122 08:47:57.437624 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4prz8" event={"ID":"b079f0f9-7b51-4800-b13f-f8d23132560f","Type":"ContainerStarted","Data":"e88d889e242c35376c936f9d02c0a603110a25b482418a75d4ea482a7124d628"} Nov 22 08:47:57 crc kubenswrapper[4856]: I1122 08:47:57.461849 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-4prz8" podStartSLOduration=2.668970534 podStartE2EDuration="35.461829042s" podCreationTimestamp="2025-11-22 08:47:22 +0000 UTC" firstStartedPulling="2025-11-22 08:47:23.419286559 +0000 UTC m=+6285.832679817" lastFinishedPulling="2025-11-22 08:47:56.212145067 +0000 UTC m=+6318.625538325" observedRunningTime="2025-11-22 08:47:57.453918968 +0000 UTC m=+6319.867312236" watchObservedRunningTime="2025-11-22 08:47:57.461829042 +0000 UTC m=+6319.875222290" Nov 22 08:47:59 crc kubenswrapper[4856]: I1122 08:47:59.462363 4856 generic.go:334] "Generic (PLEG): container finished" podID="b079f0f9-7b51-4800-b13f-f8d23132560f" containerID="e88d889e242c35376c936f9d02c0a603110a25b482418a75d4ea482a7124d628" exitCode=0 Nov 22 08:47:59 crc kubenswrapper[4856]: I1122 08:47:59.462459 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4prz8" event={"ID":"b079f0f9-7b51-4800-b13f-f8d23132560f","Type":"ContainerDied","Data":"e88d889e242c35376c936f9d02c0a603110a25b482418a75d4ea482a7124d628"} Nov 22 
08:48:00 crc kubenswrapper[4856]: I1122 08:48:00.794385 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4prz8" Nov 22 08:48:00 crc kubenswrapper[4856]: I1122 08:48:00.828523 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f58664c9d-xr6gw" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.107:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8443: connect: connection refused" Nov 22 08:48:00 crc kubenswrapper[4856]: I1122 08:48:00.924469 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js7gn\" (UniqueName: \"kubernetes.io/projected/b079f0f9-7b51-4800-b13f-f8d23132560f-kube-api-access-js7gn\") pod \"b079f0f9-7b51-4800-b13f-f8d23132560f\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " Nov 22 08:48:00 crc kubenswrapper[4856]: I1122 08:48:00.924575 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-combined-ca-bundle\") pod \"b079f0f9-7b51-4800-b13f-f8d23132560f\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " Nov 22 08:48:00 crc kubenswrapper[4856]: I1122 08:48:00.924734 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-config-data\") pod \"b079f0f9-7b51-4800-b13f-f8d23132560f\" (UID: \"b079f0f9-7b51-4800-b13f-f8d23132560f\") " Nov 22 08:48:00 crc kubenswrapper[4856]: I1122 08:48:00.930958 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b079f0f9-7b51-4800-b13f-f8d23132560f-kube-api-access-js7gn" (OuterVolumeSpecName: "kube-api-access-js7gn") pod "b079f0f9-7b51-4800-b13f-f8d23132560f" (UID: "b079f0f9-7b51-4800-b13f-f8d23132560f"). InnerVolumeSpecName "kube-api-access-js7gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:00 crc kubenswrapper[4856]: I1122 08:48:00.955475 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b079f0f9-7b51-4800-b13f-f8d23132560f" (UID: "b079f0f9-7b51-4800-b13f-f8d23132560f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:01 crc kubenswrapper[4856]: I1122 08:48:01.007814 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-config-data" (OuterVolumeSpecName: "config-data") pod "b079f0f9-7b51-4800-b13f-f8d23132560f" (UID: "b079f0f9-7b51-4800-b13f-f8d23132560f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:01 crc kubenswrapper[4856]: I1122 08:48:01.026843 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js7gn\" (UniqueName: \"kubernetes.io/projected/b079f0f9-7b51-4800-b13f-f8d23132560f-kube-api-access-js7gn\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:01 crc kubenswrapper[4856]: I1122 08:48:01.027167 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:01 crc kubenswrapper[4856]: I1122 08:48:01.027178 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b079f0f9-7b51-4800-b13f-f8d23132560f-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:01 crc kubenswrapper[4856]: I1122 08:48:01.483055 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-4prz8" event={"ID":"b079f0f9-7b51-4800-b13f-f8d23132560f","Type":"ContainerDied","Data":"2f20b6b062c258c83e0ee3154e851ff6b512d4e0bee1755752e8caf29c9b7385"} Nov 22 08:48:01 crc kubenswrapper[4856]: I1122 08:48:01.483321 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f20b6b062c258c83e0ee3154e851ff6b512d4e0bee1755752e8caf29c9b7385" Nov 22 08:48:01 crc kubenswrapper[4856]: I1122 08:48:01.483335 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-4prz8" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.619012 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6874b545dc-hd8t9"] Nov 22 08:48:02 crc kubenswrapper[4856]: E1122 08:48:02.619453 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b079f0f9-7b51-4800-b13f-f8d23132560f" containerName="heat-db-sync" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.619467 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b079f0f9-7b51-4800-b13f-f8d23132560f" containerName="heat-db-sync" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.619727 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b079f0f9-7b51-4800-b13f-f8d23132560f" containerName="heat-db-sync" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.620363 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.622944 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.623074 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-zldpj" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.623120 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.629384 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6874b545dc-hd8t9"] Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.660252 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.660301 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqx2g\" (UniqueName: \"kubernetes.io/projected/a33c09f9-1cb0-4669-b848-c83ad7aa9399-kube-api-access-gqx2g\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.660659 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data-custom\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.660825 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-combined-ca-bundle\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.768766 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-combined-ca-bundle\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.769564 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqx2g\" (UniqueName: \"kubernetes.io/projected/a33c09f9-1cb0-4669-b848-c83ad7aa9399-kube-api-access-gqx2g\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.769599 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " 
pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.769838 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data-custom\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.806992 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.814025 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqx2g\" (UniqueName: \"kubernetes.io/projected/a33c09f9-1cb0-4669-b848-c83ad7aa9399-kube-api-access-gqx2g\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.821908 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-combined-ca-bundle\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.852550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data-custom\") pod \"heat-engine-6874b545dc-hd8t9\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.913633 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d4998f8d4-f2d47"] Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.921023 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.923953 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.941464 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-77bdc5c485-2d4lv"] Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.942799 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.943087 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.946637 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.960568 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d4998f8d4-f2d47"] Nov 22 08:48:02 crc kubenswrapper[4856]: I1122 08:48:02.978211 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77bdc5c485-2d4lv"] Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.084993 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwz4b\" (UniqueName: \"kubernetes.io/projected/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-kube-api-access-kwz4b\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.085071 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data-custom\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.085103 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwxsz\" (UniqueName: \"kubernetes.io/projected/4767616d-a5cc-4f87-b5e0-02597270df9c-kube-api-access-hwxsz\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.085141 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-combined-ca-bundle\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.085343 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.085401 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.085468 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-combined-ca-bundle\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.085664 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data-custom\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.187827 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.188117 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.188146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-combined-ca-bundle\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.188207 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data-custom\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.188272 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwz4b\" (UniqueName: \"kubernetes.io/projected/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-kube-api-access-kwz4b\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.188310 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data-custom\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.188345 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwxsz\" (UniqueName: \"kubernetes.io/projected/4767616d-a5cc-4f87-b5e0-02597270df9c-kube-api-access-hwxsz\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.188374 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-combined-ca-bundle\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.198124 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-combined-ca-bundle\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.198782 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data-custom\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.204220 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.209439 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data-custom\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.213067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.216455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-combined-ca-bundle\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.234495 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwxsz\" (UniqueName: \"kubernetes.io/projected/4767616d-a5cc-4f87-b5e0-02597270df9c-kube-api-access-hwxsz\") pod \"heat-api-6d4998f8d4-f2d47\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.236064 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwz4b\" (UniqueName: \"kubernetes.io/projected/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-kube-api-access-kwz4b\") pod \"heat-cfnapi-77bdc5c485-2d4lv\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.251460 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.271659 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.566199 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6874b545dc-hd8t9"] Nov 22 08:48:03 crc kubenswrapper[4856]: W1122 08:48:03.569894 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda33c09f9_1cb0_4669_b848_c83ad7aa9399.slice/crio-5b9f126d0252e08a0c2aa70a742e43a0886bb89c94f8fedaa10202265866ff48 WatchSource:0}: Error finding container 5b9f126d0252e08a0c2aa70a742e43a0886bb89c94f8fedaa10202265866ff48: Status 404 returned error can't find the container with id 5b9f126d0252e08a0c2aa70a742e43a0886bb89c94f8fedaa10202265866ff48 Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.700477 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77bdc5c485-2d4lv"] Nov 22 08:48:03 crc kubenswrapper[4856]: I1122 08:48:03.833343 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d4998f8d4-f2d47"] Nov 22 08:48:03 crc kubenswrapper[4856]: W1122 08:48:03.842403 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4767616d_a5cc_4f87_b5e0_02597270df9c.slice/crio-af882fcd7f075ce39da4e856c7b7e5f3d1e9b854b3ea39b3bbf62c8fea7523d8 WatchSource:0}: Error finding container af882fcd7f075ce39da4e856c7b7e5f3d1e9b854b3ea39b3bbf62c8fea7523d8: Status 404 returned error can't find the container with id af882fcd7f075ce39da4e856c7b7e5f3d1e9b854b3ea39b3bbf62c8fea7523d8 Nov 22 08:48:04 crc kubenswrapper[4856]: I1122 08:48:04.514162 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6874b545dc-hd8t9" event={"ID":"a33c09f9-1cb0-4669-b848-c83ad7aa9399","Type":"ContainerStarted","Data":"5b9f126d0252e08a0c2aa70a742e43a0886bb89c94f8fedaa10202265866ff48"} Nov 22 08:48:04 crc kubenswrapper[4856]: I1122 08:48:04.515188 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d4998f8d4-f2d47" event={"ID":"4767616d-a5cc-4f87-b5e0-02597270df9c","Type":"ContainerStarted","Data":"af882fcd7f075ce39da4e856c7b7e5f3d1e9b854b3ea39b3bbf62c8fea7523d8"} Nov 22 08:48:04 crc kubenswrapper[4856]: I1122 08:48:04.516392 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" event={"ID":"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6","Type":"ContainerStarted","Data":"1c6364454a8e9fca49f6a5e1c39e28547ca5c07e50272eb8c5ba5c9db78ae030"} Nov 22 08:48:07 crc kubenswrapper[4856]: I1122 08:48:07.545268 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6874b545dc-hd8t9" event={"ID":"a33c09f9-1cb0-4669-b848-c83ad7aa9399","Type":"ContainerStarted","Data":"6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a"} Nov 22 08:48:08 crc kubenswrapper[4856]: I1122 08:48:08.554367 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:08 crc kubenswrapper[4856]: I1122 08:48:08.578921 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6874b545dc-hd8t9" podStartSLOduration=6.578903641 podStartE2EDuration="6.578903641s" podCreationTimestamp="2025-11-22 08:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:48:08.573519345 +0000 UTC 
m=+6330.986912593" watchObservedRunningTime="2025-11-22 08:48:08.578903641 +0000 UTC m=+6330.992296899" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.162915 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6d55bbbf85-9nqnt"] Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.165042 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.174700 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6b7d77cf87-p5fpf"] Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.176632 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.188495 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d55bbbf85-9nqnt"] Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.198221 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-69cc8bfcfd-m6qhr"] Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.199888 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.216413 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b7d77cf87-p5fpf"] Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.223939 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69cc8bfcfd-m6qhr"] Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338021 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data-custom\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338260 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-config-data-custom\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338445 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data-custom\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338534 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-config-data\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-combined-ca-bundle\") pod 
\"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338717 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4lw\" (UniqueName: \"kubernetes.io/projected/8dc69498-58c4-486b-85e9-cf1a9c645a79-kube-api-access-kn4lw\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338800 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-combined-ca-bundle\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.338880 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.339028 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-combined-ca-bundle\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.339099 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.339137 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2ksh\" (UniqueName: \"kubernetes.io/projected/ad9a5183-bb59-4674-8656-2a931e90c81f-kube-api-access-b2ksh\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.339196 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8rjr\" (UniqueName: \"kubernetes.io/projected/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-kube-api-access-l8rjr\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441251 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn4lw\" (UniqueName: \"kubernetes.io/projected/8dc69498-58c4-486b-85e9-cf1a9c645a79-kube-api-access-kn4lw\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441537 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-combined-ca-bundle\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441567 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441612 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-combined-ca-bundle\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441643 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441665 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2ksh\" (UniqueName: \"kubernetes.io/projected/ad9a5183-bb59-4674-8656-2a931e90c81f-kube-api-access-b2ksh\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8rjr\" (UniqueName: \"kubernetes.io/projected/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-kube-api-access-l8rjr\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441702 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data-custom\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441744 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-config-data-custom\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441773 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data-custom\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441796 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-config-data\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.441826 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-combined-ca-bundle\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.447979 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data-custom\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.448780 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.449008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-config-data\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.449373 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-combined-ca-bundle\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.449582 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-combined-ca-bundle\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.450789 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data-custom\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.453767 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-config-data-custom\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.458885 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: 
\"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.463576 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad9a5183-bb59-4674-8656-2a931e90c81f-combined-ca-bundle\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.465683 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn4lw\" (UniqueName: \"kubernetes.io/projected/8dc69498-58c4-486b-85e9-cf1a9c645a79-kube-api-access-kn4lw\") pod \"heat-cfnapi-69cc8bfcfd-m6qhr\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.480457 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8rjr\" (UniqueName: \"kubernetes.io/projected/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-kube-api-access-l8rjr\") pod \"heat-api-6b7d77cf87-p5fpf\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.482965 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2ksh\" (UniqueName: \"kubernetes.io/projected/ad9a5183-bb59-4674-8656-2a931e90c81f-kube-api-access-b2ksh\") pod \"heat-engine-6d55bbbf85-9nqnt\" (UID: \"ad9a5183-bb59-4674-8656-2a931e90c81f\") " pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.492195 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.521149 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.530070 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.829018 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5f58664c9d-xr6gw" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.107:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.107:8443: connect: connection refused" Nov 22 08:48:10 crc kubenswrapper[4856]: I1122 08:48:10.829131 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.090385 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77bdc5c485-2d4lv"] Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.099763 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d4998f8d4-f2d47"] Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.117782 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5dfbf757c6-zbhzc"] Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.119069 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.121114 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.123270 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.138778 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6df48cd58f-ngxlf"] Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.140373 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.144080 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.144343 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.155552 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5dfbf757c6-zbhzc"] Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.167369 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6df48cd58f-ngxlf"] Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258557 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-internal-tls-certs\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-config-data-custom\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258629 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-public-tls-certs\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258679 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-combined-ca-bundle\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258717 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-config-data\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258747 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrp45\" (UniqueName: \"kubernetes.io/projected/19d88c37-ea75-4207-bb92-9265863c4da6-kube-api-access-rrp45\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-combined-ca-bundle\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258789 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wx2q\" (UniqueName: \"kubernetes.io/projected/6d443d4c-63dd-49d9-ba0e-815576ade7a6-kube-api-access-8wx2q\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258828 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-config-data\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258857 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-public-tls-certs\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258876 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-internal-tls-certs\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.258893 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-config-data-custom\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360590 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-config-data\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360659 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrp45\" (UniqueName: \"kubernetes.io/projected/19d88c37-ea75-4207-bb92-9265863c4da6-kube-api-access-rrp45\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " 
pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360686 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-combined-ca-bundle\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360704 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wx2q\" (UniqueName: \"kubernetes.io/projected/6d443d4c-63dd-49d9-ba0e-815576ade7a6-kube-api-access-8wx2q\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360750 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-config-data\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360783 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-public-tls-certs\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360798 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-internal-tls-certs\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360815 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-config-data-custom\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-internal-tls-certs\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360881 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-config-data-custom\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360899 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-public-tls-certs\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " 
pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.360944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-combined-ca-bundle\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.365611 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-public-tls-certs\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.365930 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-combined-ca-bundle\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.366325 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-public-tls-certs\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.368301 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-combined-ca-bundle\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.369095 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-internal-tls-certs\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.370910 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-config-data-custom\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.372469 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-config-data\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.378132 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-internal-tls-certs\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.383857 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrp45\" (UniqueName: \"kubernetes.io/projected/19d88c37-ea75-4207-bb92-9265863c4da6-kube-api-access-rrp45\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.385165 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wx2q\" (UniqueName: \"kubernetes.io/projected/6d443d4c-63dd-49d9-ba0e-815576ade7a6-kube-api-access-8wx2q\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.385617 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19d88c37-ea75-4207-bb92-9265863c4da6-config-data\") pod \"heat-api-6df48cd58f-ngxlf\" (UID: \"19d88c37-ea75-4207-bb92-9265863c4da6\") " pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.385934 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6d443d4c-63dd-49d9-ba0e-815576ade7a6-config-data-custom\") pod \"heat-cfnapi-5dfbf757c6-zbhzc\" (UID: \"6d443d4c-63dd-49d9-ba0e-815576ade7a6\") " pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.438181 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:11 crc kubenswrapper[4856]: I1122 08:48:11.460299 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:17 crc kubenswrapper[4856]: I1122 08:48:17.065725 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="56903f1f-89ce-4eca-bd84-0cd0e3814079" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.1.59:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:48:18 crc kubenswrapper[4856]: I1122 08:48:18.947965 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x8ch2"] Nov 22 08:48:18 crc kubenswrapper[4856]: I1122 08:48:18.950491 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:18 crc kubenswrapper[4856]: I1122 08:48:18.965325 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8ch2"] Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.145475 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svpb7\" (UniqueName: \"kubernetes.io/projected/5f5048ca-07db-4e30-9138-c93910df1958-kube-api-access-svpb7\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.145592 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-catalog-content\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.145616 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-utilities\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.247612 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svpb7\" (UniqueName: \"kubernetes.io/projected/5f5048ca-07db-4e30-9138-c93910df1958-kube-api-access-svpb7\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.247748 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-catalog-content\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.247775 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-utilities\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.248551 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-utilities\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.249160 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-catalog-content\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.275577 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-svpb7\" (UniqueName: \"kubernetes.io/projected/5f5048ca-07db-4e30-9138-c93910df1958-kube-api-access-svpb7\") pod \"community-operators-x8ch2\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: I1122 08:48:19.285221 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:19 crc kubenswrapper[4856]: E1122 08:48:19.382046 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-api-cfn:87d86758a49b8425a546c66207f21761" Nov 22 08:48:19 crc kubenswrapper[4856]: E1122 08:48:19.382116 4856 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-heat-api-cfn:87d86758a49b8425a546c66207f21761" Nov 22 08:48:19 crc kubenswrapper[4856]: E1122 08:48:19.382283 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-cfnapi,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-heat-api-cfn:87d86758a49b8425a546c66207f21761,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_httpd_setup && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n556hdchb6h5bbhb7h55ch5dbh654h5cfhddh69h6ch89h68dhf6h578h55h55bhcch677h664h694h656h7bh66hfh65hf6hfbh648h5fdhc7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:heat-cfnapi-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-custom,ReadOnly:true,MountPath:/etc/heat/heat.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwz4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthcheck,Port:{0 8000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:10,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-cfnapi-77bdc5c485-2d4lv_openstack(ffc936a2-ec4f-4e0d-a69e-505b55c08ce6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 08:48:19 crc kubenswrapper[4856]: E1122 08:48:19.383849 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" podUID="ffc936a2-ec4f-4e0d-a69e-505b55c08ce6" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.001065 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.166878 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data\") pod \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.167351 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwz4b\" (UniqueName: \"kubernetes.io/projected/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-kube-api-access-kwz4b\") pod \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.167429 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data-custom\") pod \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.167568 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-combined-ca-bundle\") pod \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\" (UID: \"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.174129 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data" (OuterVolumeSpecName: "config-data") pod "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6" (UID: "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.175055 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-kube-api-access-kwz4b" (OuterVolumeSpecName: "kube-api-access-kwz4b") pod "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6" (UID: "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6"). InnerVolumeSpecName "kube-api-access-kwz4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.177634 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6" (UID: "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.179325 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6" (UID: "ffc936a2-ec4f-4e0d-a69e-505b55c08ce6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.271100 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwz4b\" (UniqueName: \"kubernetes.io/projected/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-kube-api-access-kwz4b\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.271661 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.271672 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.271707 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.457899 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-69cc8bfcfd-m6qhr"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.474950 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6df48cd58f-ngxlf"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.623100 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b7d77cf87-p5fpf"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.657947 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d55bbbf85-9nqnt"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.682202 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5dfbf757c6-zbhzc"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.754971 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6d4998f8d4-f2d47" podUID="4767616d-a5cc-4f87-b5e0-02597270df9c" 
containerName="heat-api" containerID="cri-o://19ec5d404d572f1e408ca47d1429feb741561336dee71c8d596b3bb7366f4975" gracePeriod=60 Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.759111 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.759145 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" event={"ID":"8dc69498-58c4-486b-85e9-cf1a9c645a79","Type":"ContainerStarted","Data":"7de05e0e5ea3868af990f78eebdc06e7c9b9d1ca9103f17e9b08530fd16706e0"} Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.759183 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d4998f8d4-f2d47" event={"ID":"4767616d-a5cc-4f87-b5e0-02597270df9c","Type":"ContainerStarted","Data":"19ec5d404d572f1e408ca47d1429feb741561336dee71c8d596b3bb7366f4975"} Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.771776 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6df48cd58f-ngxlf" event={"ID":"19d88c37-ea75-4207-bb92-9265863c4da6","Type":"ContainerStarted","Data":"5a278a5695d679494085f0336dd5c5b0083751bcf3951905b95b5e59dcef07a9"} Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.773423 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.779561 4856 generic.go:334] "Generic (PLEG): container finished" podID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerID="d25884b425897151b8b46a058d2e89e36cf56c280458025a88acd48b68c8f0b2" exitCode=137 Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.780351 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f58664c9d-xr6gw" event={"ID":"1fac214f-463f-4451-a06c-2e4750ff1eb3","Type":"ContainerDied","Data":"d25884b425897151b8b46a058d2e89e36cf56c280458025a88acd48b68c8f0b2"} Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.780403 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x8ch2"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.780428 4856 scope.go:117] "RemoveContainer" containerID="d675e288c9fc3739d3c69a4edbe815fd636a8fe5322484a8ad3c0e206a0a7afc" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.785715 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d4998f8d4-f2d47" podStartSLOduration=2.711246122 podStartE2EDuration="18.785678621s" podCreationTimestamp="2025-11-22 08:48:02 +0000 UTC" firstStartedPulling="2025-11-22 08:48:03.844871929 +0000 UTC m=+6326.258265187" lastFinishedPulling="2025-11-22 08:48:19.919304428 +0000 UTC m=+6342.332697686" observedRunningTime="2025-11-22 08:48:20.776833402 +0000 UTC m=+6343.190226650" watchObservedRunningTime="2025-11-22 08:48:20.785678621 +0000 UTC m=+6343.199071879" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.790028 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" event={"ID":"ffc936a2-ec4f-4e0d-a69e-505b55c08ce6","Type":"ContainerDied","Data":"1c6364454a8e9fca49f6a5e1c39e28547ca5c07e50272eb8c5ba5c9db78ae030"} Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.790105 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-77bdc5c485-2d4lv" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898182 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-77bdc5c485-2d4lv"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898217 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-secret-key\") pod \"1fac214f-463f-4451-a06c-2e4750ff1eb3\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898299 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-scripts\") pod \"1fac214f-463f-4451-a06c-2e4750ff1eb3\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898328 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-config-data\") pod \"1fac214f-463f-4451-a06c-2e4750ff1eb3\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898374 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-combined-ca-bundle\") pod \"1fac214f-463f-4451-a06c-2e4750ff1eb3\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898432 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fac214f-463f-4451-a06c-2e4750ff1eb3-logs\") pod \"1fac214f-463f-4451-a06c-2e4750ff1eb3\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898542 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-tls-certs\") pod \"1fac214f-463f-4451-a06c-2e4750ff1eb3\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.898634 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qph66\" (UniqueName: \"kubernetes.io/projected/1fac214f-463f-4451-a06c-2e4750ff1eb3-kube-api-access-qph66\") pod \"1fac214f-463f-4451-a06c-2e4750ff1eb3\" (UID: \"1fac214f-463f-4451-a06c-2e4750ff1eb3\") " Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.899555 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fac214f-463f-4451-a06c-2e4750ff1eb3-logs" (OuterVolumeSpecName: "logs") pod "1fac214f-463f-4451-a06c-2e4750ff1eb3" (UID: "1fac214f-463f-4451-a06c-2e4750ff1eb3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.903579 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1fac214f-463f-4451-a06c-2e4750ff1eb3" (UID: "1fac214f-463f-4451-a06c-2e4750ff1eb3"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.911164 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fac214f-463f-4451-a06c-2e4750ff1eb3-kube-api-access-qph66" (OuterVolumeSpecName: "kube-api-access-qph66") pod "1fac214f-463f-4451-a06c-2e4750ff1eb3" (UID: "1fac214f-463f-4451-a06c-2e4750ff1eb3"). InnerVolumeSpecName "kube-api-access-qph66". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.912622 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-77bdc5c485-2d4lv"] Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.934667 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-config-data" (OuterVolumeSpecName: "config-data") pod "1fac214f-463f-4451-a06c-2e4750ff1eb3" (UID: "1fac214f-463f-4451-a06c-2e4750ff1eb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.941081 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-scripts" (OuterVolumeSpecName: "scripts") pod "1fac214f-463f-4451-a06c-2e4750ff1eb3" (UID: "1fac214f-463f-4451-a06c-2e4750ff1eb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.946435 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fac214f-463f-4451-a06c-2e4750ff1eb3" (UID: "1fac214f-463f-4451-a06c-2e4750ff1eb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:20 crc kubenswrapper[4856]: I1122 08:48:20.977082 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1fac214f-463f-4451-a06c-2e4750ff1eb3" (UID: "1fac214f-463f-4451-a06c-2e4750ff1eb3"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.001256 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1fac214f-463f-4451-a06c-2e4750ff1eb3-logs\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.001306 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.001321 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qph66\" (UniqueName: \"kubernetes.io/projected/1fac214f-463f-4451-a06c-2e4750ff1eb3-kube-api-access-qph66\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.001335 4856 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.001350 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.001361 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1fac214f-463f-4451-a06c-2e4750ff1eb3-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.001372 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fac214f-463f-4451-a06c-2e4750ff1eb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.056338 4856 scope.go:117] "RemoveContainer" containerID="d25884b425897151b8b46a058d2e89e36cf56c280458025a88acd48b68c8f0b2" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.855130 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" event={"ID":"6d443d4c-63dd-49d9-ba0e-815576ade7a6","Type":"ContainerStarted","Data":"28112066078fbc7fe87b49529a03161770c6082a344d696d551760000bcd25d8"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.871480 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d55bbbf85-9nqnt" event={"ID":"ad9a5183-bb59-4674-8656-2a931e90c81f","Type":"ContainerStarted","Data":"2750913c60e2e88e507b5319a80a97d968df2ce23c3dbf9f76499d8c60ccfd13"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.871546 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d55bbbf85-9nqnt" event={"ID":"ad9a5183-bb59-4674-8656-2a931e90c81f","Type":"ContainerStarted","Data":"6a24b294ca5beafbb5e0badc0c9f1b869f8c512994ee0c1f5482893b68c4aabe"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.871935 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.877894 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b7d77cf87-p5fpf" event={"ID":"3f534a95-bc51-4b61-ab48-27a0ad0cf6de","Type":"ContainerStarted","Data":"84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9"} Nov 22 08:48:21 crc 
kubenswrapper[4856]: I1122 08:48:21.877961 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b7d77cf87-p5fpf" event={"ID":"3f534a95-bc51-4b61-ab48-27a0ad0cf6de","Type":"ContainerStarted","Data":"f701b1d66215b3a7467e3de11568f79af7df23ced1c3c91ed2173c5b67cafb43"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.878038 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.893501 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5f58664c9d-xr6gw" event={"ID":"1fac214f-463f-4451-a06c-2e4750ff1eb3","Type":"ContainerDied","Data":"8e21a2d25639438f16c8f942b2046ca73bc44474b9df9c3b608f42328c1c03aa"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.893907 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5f58664c9d-xr6gw" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.899051 4856 generic.go:334] "Generic (PLEG): container finished" podID="5f5048ca-07db-4e30-9138-c93910df1958" containerID="9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94" exitCode=0 Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.899167 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8ch2" event={"ID":"5f5048ca-07db-4e30-9138-c93910df1958","Type":"ContainerDied","Data":"9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.899201 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8ch2" event={"ID":"5f5048ca-07db-4e30-9138-c93910df1958","Type":"ContainerStarted","Data":"ff133cbac52dc251de9b587493a85a9fc9e1bdd545253d541efe8d8719b71f41"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.906744 4856 generic.go:334] "Generic (PLEG): container finished" podID="4767616d-a5cc-4f87-b5e0-02597270df9c" containerID="19ec5d404d572f1e408ca47d1429feb741561336dee71c8d596b3bb7366f4975" exitCode=0 Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.906885 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d4998f8d4-f2d47" event={"ID":"4767616d-a5cc-4f87-b5e0-02597270df9c","Type":"ContainerDied","Data":"19ec5d404d572f1e408ca47d1429feb741561336dee71c8d596b3bb7366f4975"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.911934 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6d55bbbf85-9nqnt" podStartSLOduration=11.911912937 podStartE2EDuration="11.911912937s" podCreationTimestamp="2025-11-22 08:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:48:21.895813823 +0000 UTC m=+6344.309207091" watchObservedRunningTime="2025-11-22 08:48:21.911912937 +0000 UTC m=+6344.325306195" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.921667 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6df48cd58f-ngxlf" event={"ID":"19d88c37-ea75-4207-bb92-9265863c4da6","Type":"ContainerStarted","Data":"67c509b4b6522eed7d679fa0a2c168d09ce4d3b97f0bac4854ca7b254b631652"} Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.922351 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.924741 4856 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6b7d77cf87-p5fpf" podStartSLOduration=11.924712332 podStartE2EDuration="11.924712332s" podCreationTimestamp="2025-11-22 08:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:48:21.921646839 +0000 UTC m=+6344.335040117" watchObservedRunningTime="2025-11-22 08:48:21.924712332 +0000 UTC m=+6344.338105590" Nov 22 08:48:21 crc kubenswrapper[4856]: I1122 08:48:21.989358 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6df48cd58f-ngxlf" podStartSLOduration=10.989336393 podStartE2EDuration="10.989336393s" podCreationTimestamp="2025-11-22 08:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:48:21.974415052 +0000 UTC m=+6344.387808330" watchObservedRunningTime="2025-11-22 08:48:21.989336393 +0000 UTC m=+6344.402729651" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.023247 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5f58664c9d-xr6gw"] Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.036541 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5f58664c9d-xr6gw"] Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.328474 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.458901 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data\") pod \"4767616d-a5cc-4f87-b5e0-02597270df9c\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.459805 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwxsz\" (UniqueName: \"kubernetes.io/projected/4767616d-a5cc-4f87-b5e0-02597270df9c-kube-api-access-hwxsz\") pod \"4767616d-a5cc-4f87-b5e0-02597270df9c\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.460000 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data-custom\") pod \"4767616d-a5cc-4f87-b5e0-02597270df9c\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.460062 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-combined-ca-bundle\") pod \"4767616d-a5cc-4f87-b5e0-02597270df9c\" (UID: \"4767616d-a5cc-4f87-b5e0-02597270df9c\") " Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.465136 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4767616d-a5cc-4f87-b5e0-02597270df9c" (UID: "4767616d-a5cc-4f87-b5e0-02597270df9c"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.465570 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4767616d-a5cc-4f87-b5e0-02597270df9c-kube-api-access-hwxsz" (OuterVolumeSpecName: "kube-api-access-hwxsz") pod "4767616d-a5cc-4f87-b5e0-02597270df9c" (UID: "4767616d-a5cc-4f87-b5e0-02597270df9c"). InnerVolumeSpecName "kube-api-access-hwxsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.488350 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4767616d-a5cc-4f87-b5e0-02597270df9c" (UID: "4767616d-a5cc-4f87-b5e0-02597270df9c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.525195 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data" (OuterVolumeSpecName: "config-data") pod "4767616d-a5cc-4f87-b5e0-02597270df9c" (UID: "4767616d-a5cc-4f87-b5e0-02597270df9c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.564349 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.564426 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.564438 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwxsz\" (UniqueName: \"kubernetes.io/projected/4767616d-a5cc-4f87-b5e0-02597270df9c-kube-api-access-hwxsz\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.564454 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4767616d-a5cc-4f87-b5e0-02597270df9c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.721185 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" path="/var/lib/kubelet/pods/1fac214f-463f-4451-a06c-2e4750ff1eb3/volumes" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.721948 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffc936a2-ec4f-4e0d-a69e-505b55c08ce6" path="/var/lib/kubelet/pods/ffc936a2-ec4f-4e0d-a69e-505b55c08ce6/volumes" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.932806 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d4998f8d4-f2d47" event={"ID":"4767616d-a5cc-4f87-b5e0-02597270df9c","Type":"ContainerDied","Data":"af882fcd7f075ce39da4e856c7b7e5f3d1e9b854b3ea39b3bbf62c8fea7523d8"} Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.933178 4856 scope.go:117] "RemoveContainer" containerID="19ec5d404d572f1e408ca47d1429feb741561336dee71c8d596b3bb7366f4975" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.933442 4856 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d4998f8d4-f2d47" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.936739 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerID="84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9" exitCode=1 Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.937728 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b7d77cf87-p5fpf" event={"ID":"3f534a95-bc51-4b61-ab48-27a0ad0cf6de","Type":"ContainerDied","Data":"84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9"} Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.937947 4856 scope.go:117] "RemoveContainer" containerID="84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9" Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.978485 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d4998f8d4-f2d47"] Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.985288 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6d4998f8d4-f2d47"] Nov 22 08:48:22 crc kubenswrapper[4856]: I1122 08:48:22.986315 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:24 crc kubenswrapper[4856]: I1122 08:48:24.082771 4856 scope.go:117] "RemoveContainer" containerID="fccad72b5ea8ef54c9f9294cd09d881b479aa1a8606754fc56187acb4b451818" Nov 22 08:48:24 crc kubenswrapper[4856]: I1122 08:48:24.721921 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4767616d-a5cc-4f87-b5e0-02597270df9c" path="/var/lib/kubelet/pods/4767616d-a5cc-4f87-b5e0-02597270df9c/volumes" Nov 22 08:48:24 crc kubenswrapper[4856]: I1122 08:48:24.959006 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" event={"ID":"6d443d4c-63dd-49d9-ba0e-815576ade7a6","Type":"ContainerStarted","Data":"6b84eb7178a5fe2888c70edcc48c6146d7c07b35a13ee8ab817f686a4eaeaf65"} Nov 22 08:48:24 crc kubenswrapper[4856]: I1122 08:48:24.963141 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b7d77cf87-p5fpf" event={"ID":"3f534a95-bc51-4b61-ab48-27a0ad0cf6de","Type":"ContainerStarted","Data":"711a2ed759d80bf91947ea320f4b494674b78d07d506d750c3a0441b11de9f8b"} Nov 22 08:48:24 crc kubenswrapper[4856]: I1122 08:48:24.963191 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:24 crc kubenswrapper[4856]: I1122 08:48:24.966146 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" event={"ID":"8dc69498-58c4-486b-85e9-cf1a9c645a79","Type":"ContainerStarted","Data":"d54de8eff90e72e788dc54b5c9862ffa21a326e29acf1779ee393cb174fc5775"} Nov 22 08:48:25 crc kubenswrapper[4856]: I1122 08:48:25.975763 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:25 crc kubenswrapper[4856]: I1122 08:48:25.997665 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" podStartSLOduration=12.665725204 podStartE2EDuration="15.997638303s" podCreationTimestamp="2025-11-22 08:48:10 +0000 UTC" firstStartedPulling="2025-11-22 08:48:20.562488794 +0000 UTC m=+6342.975882052" lastFinishedPulling="2025-11-22 08:48:23.894401893 +0000 UTC m=+6346.307795151" 
observedRunningTime="2025-11-22 08:48:25.992294639 +0000 UTC m=+6348.405687897" watchObservedRunningTime="2025-11-22 08:48:25.997638303 +0000 UTC m=+6348.411031561" Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.019591 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" podStartSLOduration=11.813963461 podStartE2EDuration="15.019565775s" podCreationTimestamp="2025-11-22 08:48:11 +0000 UTC" firstStartedPulling="2025-11-22 08:48:20.688529832 +0000 UTC m=+6343.101923090" lastFinishedPulling="2025-11-22 08:48:23.894132146 +0000 UTC m=+6346.307525404" observedRunningTime="2025-11-22 08:48:26.011311152 +0000 UTC m=+6348.424704610" watchObservedRunningTime="2025-11-22 08:48:26.019565775 +0000 UTC m=+6348.432959033" Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.438499 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.987886 4856 generic.go:334] "Generic (PLEG): container finished" podID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerID="d54de8eff90e72e788dc54b5c9862ffa21a326e29acf1779ee393cb174fc5775" exitCode=1 Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.987966 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" event={"ID":"8dc69498-58c4-486b-85e9-cf1a9c645a79","Type":"ContainerDied","Data":"d54de8eff90e72e788dc54b5c9862ffa21a326e29acf1779ee393cb174fc5775"} Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.988754 4856 scope.go:117] "RemoveContainer" containerID="d54de8eff90e72e788dc54b5c9862ffa21a326e29acf1779ee393cb174fc5775" Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.992942 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8ch2" event={"ID":"5f5048ca-07db-4e30-9138-c93910df1958","Type":"ContainerStarted","Data":"2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b"} Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.996282 4856 generic.go:334] "Generic (PLEG): container finished" podID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerID="711a2ed759d80bf91947ea320f4b494674b78d07d506d750c3a0441b11de9f8b" exitCode=1 Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.997204 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b7d77cf87-p5fpf" event={"ID":"3f534a95-bc51-4b61-ab48-27a0ad0cf6de","Type":"ContainerDied","Data":"711a2ed759d80bf91947ea320f4b494674b78d07d506d750c3a0441b11de9f8b"} Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.997270 4856 scope.go:117] "RemoveContainer" containerID="84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9" Nov 22 08:48:26 crc kubenswrapper[4856]: I1122 08:48:26.998892 4856 scope.go:117] "RemoveContainer" containerID="711a2ed759d80bf91947ea320f4b494674b78d07d506d750c3a0441b11de9f8b" Nov 22 08:48:27 crc kubenswrapper[4856]: E1122 08:48:27.009102 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-6b7d77cf87-p5fpf_openstack(3f534a95-bc51-4b61-ab48-27a0ad0cf6de)\"" pod="openstack/heat-api-6b7d77cf87-p5fpf" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.021137 4856 generic.go:334] "Generic (PLEG): container finished" podID="5f5048ca-07db-4e30-9138-c93910df1958" 
containerID="2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b" exitCode=0 Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.021183 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8ch2" event={"ID":"5f5048ca-07db-4e30-9138-c93910df1958","Type":"ContainerDied","Data":"2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b"} Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.088725 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6df48cd58f-ngxlf" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.149751 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b7d77cf87-p5fpf"] Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.510555 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.689340 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data-custom\") pod \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.689381 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8rjr\" (UniqueName: \"kubernetes.io/projected/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-kube-api-access-l8rjr\") pod \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.689486 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data\") pod \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.689521 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-combined-ca-bundle\") pod \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\" (UID: \"3f534a95-bc51-4b61-ab48-27a0ad0cf6de\") " Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.695469 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f534a95-bc51-4b61-ab48-27a0ad0cf6de" (UID: "3f534a95-bc51-4b61-ab48-27a0ad0cf6de"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.696131 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-kube-api-access-l8rjr" (OuterVolumeSpecName: "kube-api-access-l8rjr") pod "3f534a95-bc51-4b61-ab48-27a0ad0cf6de" (UID: "3f534a95-bc51-4b61-ab48-27a0ad0cf6de"). InnerVolumeSpecName "kube-api-access-l8rjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.718541 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f534a95-bc51-4b61-ab48-27a0ad0cf6de" (UID: "3f534a95-bc51-4b61-ab48-27a0ad0cf6de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.746498 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data" (OuterVolumeSpecName: "config-data") pod "3f534a95-bc51-4b61-ab48-27a0ad0cf6de" (UID: "3f534a95-bc51-4b61-ab48-27a0ad0cf6de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.792706 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8rjr\" (UniqueName: \"kubernetes.io/projected/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-kube-api-access-l8rjr\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.792741 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.792750 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:28 crc kubenswrapper[4856]: I1122 08:48:28.792759 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f534a95-bc51-4b61-ab48-27a0ad0cf6de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:29 crc kubenswrapper[4856]: I1122 08:48:29.034530 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b7d77cf87-p5fpf" event={"ID":"3f534a95-bc51-4b61-ab48-27a0ad0cf6de","Type":"ContainerDied","Data":"f701b1d66215b3a7467e3de11568f79af7df23ced1c3c91ed2173c5b67cafb43"} Nov 22 08:48:29 crc kubenswrapper[4856]: I1122 08:48:29.034867 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b7d77cf87-p5fpf" Nov 22 08:48:29 crc kubenswrapper[4856]: I1122 08:48:29.071551 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b7d77cf87-p5fpf"] Nov 22 08:48:29 crc kubenswrapper[4856]: I1122 08:48:29.079979 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6b7d77cf87-p5fpf"] Nov 22 08:48:29 crc kubenswrapper[4856]: I1122 08:48:29.252048 4856 scope.go:117] "RemoveContainer" containerID="711a2ed759d80bf91947ea320f4b494674b78d07d506d750c3a0441b11de9f8b" Nov 22 08:48:29 crc kubenswrapper[4856]: I1122 08:48:29.274240 4856 scope.go:117] "RemoveContainer" containerID="84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9" Nov 22 08:48:29 crc kubenswrapper[4856]: E1122 08:48:29.274731 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9\": container with ID starting with 84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9 not found: ID does not exist" containerID="84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9" Nov 22 08:48:29 crc kubenswrapper[4856]: I1122 08:48:29.274774 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9"} err="failed to get container status \"84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9\": rpc error: code = NotFound desc = could not find container \"84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9\": container with ID starting with 84d8495dd39a6d39d8a230853678600bc5f831419bf1c07c0069a6e5022663c9 not found: ID does not exist" Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.048572 4856 generic.go:334] "Generic (PLEG): container finished" podID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerID="b9728356bb05cd8f849ff4afba13bdfc0f4cb1135ac3053757b288ac27be1eb4" exitCode=1 Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.048783 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" event={"ID":"8dc69498-58c4-486b-85e9-cf1a9c645a79","Type":"ContainerDied","Data":"b9728356bb05cd8f849ff4afba13bdfc0f4cb1135ac3053757b288ac27be1eb4"} Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.049070 4856 scope.go:117] "RemoveContainer" containerID="d54de8eff90e72e788dc54b5c9862ffa21a326e29acf1779ee393cb174fc5775" Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.049300 4856 scope.go:117] "RemoveContainer" containerID="b9728356bb05cd8f849ff4afba13bdfc0f4cb1135ac3053757b288ac27be1eb4" Nov 22 08:48:30 crc kubenswrapper[4856]: E1122 08:48:30.049607 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-69cc8bfcfd-m6qhr_openstack(8dc69498-58c4-486b-85e9-cf1a9c645a79)\"" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.051722 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8ch2" event={"ID":"5f5048ca-07db-4e30-9138-c93910df1958","Type":"ContainerStarted","Data":"639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed"} Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.090853 4856 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x8ch2" podStartSLOduration=4.876789935 podStartE2EDuration="12.090835182s" podCreationTimestamp="2025-11-22 08:48:18 +0000 UTC" firstStartedPulling="2025-11-22 08:48:22.237397211 +0000 UTC m=+6344.650790469" lastFinishedPulling="2025-11-22 08:48:29.451442458 +0000 UTC m=+6351.864835716" observedRunningTime="2025-11-22 08:48:30.081713256 +0000 UTC m=+6352.495106534" watchObservedRunningTime="2025-11-22 08:48:30.090835182 +0000 UTC m=+6352.504228440" Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.531165 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.531248 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:30 crc kubenswrapper[4856]: I1122 08:48:30.724809 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" path="/var/lib/kubelet/pods/3f534a95-bc51-4b61-ab48-27a0ad0cf6de/volumes" Nov 22 08:48:31 crc kubenswrapper[4856]: I1122 08:48:31.065177 4856 scope.go:117] "RemoveContainer" containerID="b9728356bb05cd8f849ff4afba13bdfc0f4cb1135ac3053757b288ac27be1eb4" Nov 22 08:48:31 crc kubenswrapper[4856]: E1122 08:48:31.065419 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-69cc8bfcfd-m6qhr_openstack(8dc69498-58c4-486b-85e9-cf1a9c645a79)\"" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" Nov 22 08:48:32 crc kubenswrapper[4856]: I1122 08:48:32.074183 4856 scope.go:117] "RemoveContainer" containerID="b9728356bb05cd8f849ff4afba13bdfc0f4cb1135ac3053757b288ac27be1eb4" Nov 22 08:48:32 crc kubenswrapper[4856]: E1122 08:48:32.074683 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-69cc8bfcfd-m6qhr_openstack(8dc69498-58c4-486b-85e9-cf1a9c645a79)\"" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" Nov 22 08:48:32 crc kubenswrapper[4856]: I1122 08:48:32.860966 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5dfbf757c6-zbhzc" Nov 22 08:48:32 crc kubenswrapper[4856]: I1122 08:48:32.935623 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-69cc8bfcfd-m6qhr"] Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.436399 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.492611 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-combined-ca-bundle\") pod \"8dc69498-58c4-486b-85e9-cf1a9c645a79\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.492698 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn4lw\" (UniqueName: \"kubernetes.io/projected/8dc69498-58c4-486b-85e9-cf1a9c645a79-kube-api-access-kn4lw\") pod \"8dc69498-58c4-486b-85e9-cf1a9c645a79\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.492732 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data-custom\") pod \"8dc69498-58c4-486b-85e9-cf1a9c645a79\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.492985 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data\") pod \"8dc69498-58c4-486b-85e9-cf1a9c645a79\" (UID: \"8dc69498-58c4-486b-85e9-cf1a9c645a79\") " Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.498099 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8dc69498-58c4-486b-85e9-cf1a9c645a79" (UID: "8dc69498-58c4-486b-85e9-cf1a9c645a79"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.502086 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc69498-58c4-486b-85e9-cf1a9c645a79-kube-api-access-kn4lw" (OuterVolumeSpecName: "kube-api-access-kn4lw") pod "8dc69498-58c4-486b-85e9-cf1a9c645a79" (UID: "8dc69498-58c4-486b-85e9-cf1a9c645a79"). InnerVolumeSpecName "kube-api-access-kn4lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.527788 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8dc69498-58c4-486b-85e9-cf1a9c645a79" (UID: "8dc69498-58c4-486b-85e9-cf1a9c645a79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.554036 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data" (OuterVolumeSpecName: "config-data") pod "8dc69498-58c4-486b-85e9-cf1a9c645a79" (UID: "8dc69498-58c4-486b-85e9-cf1a9c645a79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.595495 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.595566 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.595584 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn4lw\" (UniqueName: \"kubernetes.io/projected/8dc69498-58c4-486b-85e9-cf1a9c645a79-kube-api-access-kn4lw\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:33 crc kubenswrapper[4856]: I1122 08:48:33.595596 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8dc69498-58c4-486b-85e9-cf1a9c645a79-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:34 crc kubenswrapper[4856]: I1122 08:48:34.092934 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" event={"ID":"8dc69498-58c4-486b-85e9-cf1a9c645a79","Type":"ContainerDied","Data":"7de05e0e5ea3868af990f78eebdc06e7c9b9d1ca9103f17e9b08530fd16706e0"} Nov 22 08:48:34 crc kubenswrapper[4856]: I1122 08:48:34.092996 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-69cc8bfcfd-m6qhr" Nov 22 08:48:34 crc kubenswrapper[4856]: I1122 08:48:34.093306 4856 scope.go:117] "RemoveContainer" containerID="b9728356bb05cd8f849ff4afba13bdfc0f4cb1135ac3053757b288ac27be1eb4" Nov 22 08:48:34 crc kubenswrapper[4856]: I1122 08:48:34.134492 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-69cc8bfcfd-m6qhr"] Nov 22 08:48:34 crc kubenswrapper[4856]: I1122 08:48:34.144710 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-69cc8bfcfd-m6qhr"] Nov 22 08:48:34 crc kubenswrapper[4856]: I1122 08:48:34.720497 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" path="/var/lib/kubelet/pods/8dc69498-58c4-486b-85e9-cf1a9c645a79/volumes" Nov 22 08:48:39 crc kubenswrapper[4856]: I1122 08:48:39.285496 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:39 crc kubenswrapper[4856]: I1122 08:48:39.286072 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:39 crc kubenswrapper[4856]: I1122 08:48:39.330451 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:40 crc kubenswrapper[4856]: I1122 08:48:40.195829 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:40 crc kubenswrapper[4856]: I1122 08:48:40.247834 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8ch2"] Nov 22 08:48:40 crc kubenswrapper[4856]: I1122 08:48:40.527727 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6d55bbbf85-9nqnt" Nov 22 08:48:40 crc 
kubenswrapper[4856]: I1122 08:48:40.576812 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6874b545dc-hd8t9"] Nov 22 08:48:40 crc kubenswrapper[4856]: I1122 08:48:40.577203 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-6874b545dc-hd8t9" podUID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" containerName="heat-engine" containerID="cri-o://6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" gracePeriod=60 Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.166201 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x8ch2" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="registry-server" containerID="cri-o://639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed" gracePeriod=2 Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.652679 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.701200 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-catalog-content\") pod \"5f5048ca-07db-4e30-9138-c93910df1958\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.701457 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-utilities\") pod \"5f5048ca-07db-4e30-9138-c93910df1958\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.701490 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svpb7\" (UniqueName: \"kubernetes.io/projected/5f5048ca-07db-4e30-9138-c93910df1958-kube-api-access-svpb7\") pod \"5f5048ca-07db-4e30-9138-c93910df1958\" (UID: \"5f5048ca-07db-4e30-9138-c93910df1958\") " Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.703045 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-utilities" (OuterVolumeSpecName: "utilities") pod "5f5048ca-07db-4e30-9138-c93910df1958" (UID: "5f5048ca-07db-4e30-9138-c93910df1958"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.710742 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5048ca-07db-4e30-9138-c93910df1958-kube-api-access-svpb7" (OuterVolumeSpecName: "kube-api-access-svpb7") pod "5f5048ca-07db-4e30-9138-c93910df1958" (UID: "5f5048ca-07db-4e30-9138-c93910df1958"). InnerVolumeSpecName "kube-api-access-svpb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.805057 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.805337 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svpb7\" (UniqueName: \"kubernetes.io/projected/5f5048ca-07db-4e30-9138-c93910df1958-kube-api-access-svpb7\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.814752 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f5048ca-07db-4e30-9138-c93910df1958" (UID: "5f5048ca-07db-4e30-9138-c93910df1958"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:48:42 crc kubenswrapper[4856]: I1122 08:48:42.907905 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5048ca-07db-4e30-9138-c93910df1958-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:42 crc kubenswrapper[4856]: E1122 08:48:42.945099 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 08:48:42 crc kubenswrapper[4856]: E1122 08:48:42.946608 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 08:48:42 crc kubenswrapper[4856]: E1122 08:48:42.950619 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 08:48:42 crc kubenswrapper[4856]: E1122 08:48:42.950680 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6874b545dc-hd8t9" podUID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" containerName="heat-engine" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.176771 4856 generic.go:334] "Generic (PLEG): container finished" podID="5f5048ca-07db-4e30-9138-c93910df1958" containerID="639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed" exitCode=0 Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.176817 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x8ch2" event={"ID":"5f5048ca-07db-4e30-9138-c93910df1958","Type":"ContainerDied","Data":"639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed"} Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.176847 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-x8ch2" event={"ID":"5f5048ca-07db-4e30-9138-c93910df1958","Type":"ContainerDied","Data":"ff133cbac52dc251de9b587493a85a9fc9e1bdd545253d541efe8d8719b71f41"} Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.176869 4856 scope.go:117] "RemoveContainer" containerID="639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.176870 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x8ch2" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.199072 4856 scope.go:117] "RemoveContainer" containerID="2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.216279 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x8ch2"] Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.224419 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x8ch2"] Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.242679 4856 scope.go:117] "RemoveContainer" containerID="9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.282783 4856 scope.go:117] "RemoveContainer" containerID="639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed" Nov 22 08:48:43 crc kubenswrapper[4856]: E1122 08:48:43.283233 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed\": container with ID starting with 639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed not found: ID does not exist" containerID="639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.283291 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed"} err="failed to get container status \"639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed\": rpc error: code = NotFound desc = could not find container \"639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed\": container with ID starting with 639ba7d4e9e6e71e401f00140f7a6ee66ff1a882a258eb6897a64b739129ffed not found: ID does not exist" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.283324 4856 scope.go:117] "RemoveContainer" containerID="2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b" Nov 22 08:48:43 crc kubenswrapper[4856]: E1122 08:48:43.283658 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b\": container with ID starting with 2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b not found: ID does not exist" containerID="2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.283695 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b"} err="failed to get container status \"2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b\": rpc error: code = NotFound desc = could not find 
container \"2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b\": container with ID starting with 2a5044269d4b755baf7afb6c7d1308b6a4afc9a75f51f91c7383ba39f11add3b not found: ID does not exist" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.283716 4856 scope.go:117] "RemoveContainer" containerID="9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94" Nov 22 08:48:43 crc kubenswrapper[4856]: E1122 08:48:43.283960 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94\": container with ID starting with 9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94 not found: ID does not exist" containerID="9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94" Nov 22 08:48:43 crc kubenswrapper[4856]: I1122 08:48:43.283992 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94"} err="failed to get container status \"9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94\": rpc error: code = NotFound desc = could not find container \"9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94\": container with ID starting with 9331ef412b1f64ae243083133a6ade26f95287d625719cedcb1e896ce6a87d94 not found: ID does not exist" Nov 22 08:48:44 crc kubenswrapper[4856]: I1122 08:48:44.723294 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5048ca-07db-4e30-9138-c93910df1958" path="/var/lib/kubelet/pods/5f5048ca-07db-4e30-9138-c93910df1958/volumes" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.238219 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g"] Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.238995 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerName="heat-cfnapi" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239008 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerName="heat-cfnapi" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239023 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239028 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239041 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4767616d-a5cc-4f87-b5e0-02597270df9c" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239047 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4767616d-a5cc-4f87-b5e0-02597270df9c" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239059 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239064 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239078 4856 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon-log" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239084 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon-log" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239099 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="extract-content" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239106 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="extract-content" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239119 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="extract-utilities" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239125 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="extract-utilities" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239135 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerName="heat-cfnapi" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239141 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerName="heat-cfnapi" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239159 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="registry-server" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239164 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="registry-server" Nov 22 08:48:50 crc kubenswrapper[4856]: E1122 08:48:50.239177 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239182 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239353 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon-log" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239368 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239375 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f534a95-bc51-4b61-ab48-27a0ad0cf6de" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239382 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5048ca-07db-4e30-9138-c93910df1958" containerName="registry-server" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239393 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerName="heat-cfnapi" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239400 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fac214f-463f-4451-a06c-2e4750ff1eb3" containerName="horizon" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239414 4856 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="4767616d-a5cc-4f87-b5e0-02597270df9c" containerName="heat-api" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.239760 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc69498-58c4-486b-85e9-cf1a9c645a79" containerName="heat-cfnapi" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.240905 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.244750 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.257840 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.257960 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.258051 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tfnj\" (UniqueName: \"kubernetes.io/projected/57f3408b-029f-4f55-a8ee-d0dea3c82197-kube-api-access-4tfnj\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.262914 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g"] Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.359421 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.359589 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tfnj\" (UniqueName: \"kubernetes.io/projected/57f3408b-029f-4f55-a8ee-d0dea3c82197-kube-api-access-4tfnj\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.359684 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-util\") pod 
\"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.360147 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.360208 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.379785 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tfnj\" (UniqueName: \"kubernetes.io/projected/57f3408b-029f-4f55-a8ee-d0dea3c82197-kube-api-access-4tfnj\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:50 crc kubenswrapper[4856]: I1122 08:48:50.565909 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:51 crc kubenswrapper[4856]: I1122 08:48:51.030864 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g"] Nov 22 08:48:51 crc kubenswrapper[4856]: I1122 08:48:51.255229 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" event={"ID":"57f3408b-029f-4f55-a8ee-d0dea3c82197","Type":"ContainerStarted","Data":"590c538235519e1f8eb2526cc73150b0755368b57bafde48240631885eb365b9"} Nov 22 08:48:51 crc kubenswrapper[4856]: I1122 08:48:51.255283 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" event={"ID":"57f3408b-029f-4f55-a8ee-d0dea3c82197","Type":"ContainerStarted","Data":"6248a4d5119e2e40379c4bb8523a1ec02f70dcfaa2fb35f18fcab6eea8d1f4b0"} Nov 22 08:48:52 crc kubenswrapper[4856]: I1122 08:48:52.268103 4856 generic.go:334] "Generic (PLEG): container finished" podID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerID="590c538235519e1f8eb2526cc73150b0755368b57bafde48240631885eb365b9" exitCode=0 Nov 22 08:48:52 crc kubenswrapper[4856]: I1122 08:48:52.268610 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" event={"ID":"57f3408b-029f-4f55-a8ee-d0dea3c82197","Type":"ContainerDied","Data":"590c538235519e1f8eb2526cc73150b0755368b57bafde48240631885eb365b9"} Nov 22 08:48:52 crc kubenswrapper[4856]: I1122 08:48:52.271272 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:48:52 crc 
kubenswrapper[4856]: E1122 08:48:52.945635 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 08:48:52 crc kubenswrapper[4856]: E1122 08:48:52.948155 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 08:48:52 crc kubenswrapper[4856]: E1122 08:48:52.951000 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 22 08:48:52 crc kubenswrapper[4856]: E1122 08:48:52.951079 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-6874b545dc-hd8t9" podUID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" containerName="heat-engine" Nov 22 08:48:54 crc kubenswrapper[4856]: I1122 08:48:54.288287 4856 generic.go:334] "Generic (PLEG): container finished" podID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerID="7ffba184b8a26fccb1dc5c1973cc4214adb2daec610e7331ea6d156678165aac" exitCode=0 Nov 22 08:48:54 crc kubenswrapper[4856]: I1122 08:48:54.288650 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" event={"ID":"57f3408b-029f-4f55-a8ee-d0dea3c82197","Type":"ContainerDied","Data":"7ffba184b8a26fccb1dc5c1973cc4214adb2daec610e7331ea6d156678165aac"} Nov 22 08:48:55 crc kubenswrapper[4856]: I1122 08:48:55.299860 4856 generic.go:334] "Generic (PLEG): container finished" podID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerID="e3f6ba2ddf8708fe7132e0029a94a9e3f129d4c59f92523cbb8400c826b4ecec" exitCode=0 Nov 22 08:48:55 crc kubenswrapper[4856]: I1122 08:48:55.300177 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" event={"ID":"57f3408b-029f-4f55-a8ee-d0dea3c82197","Type":"ContainerDied","Data":"e3f6ba2ddf8708fe7132e0029a94a9e3f129d4c59f92523cbb8400c826b4ecec"} Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.686530 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.719015 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tfnj\" (UniqueName: \"kubernetes.io/projected/57f3408b-029f-4f55-a8ee-d0dea3c82197-kube-api-access-4tfnj\") pod \"57f3408b-029f-4f55-a8ee-d0dea3c82197\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.719280 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-util\") pod \"57f3408b-029f-4f55-a8ee-d0dea3c82197\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.719415 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-bundle\") pod \"57f3408b-029f-4f55-a8ee-d0dea3c82197\" (UID: \"57f3408b-029f-4f55-a8ee-d0dea3c82197\") " Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.721314 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-bundle" (OuterVolumeSpecName: "bundle") pod "57f3408b-029f-4f55-a8ee-d0dea3c82197" (UID: "57f3408b-029f-4f55-a8ee-d0dea3c82197"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.725087 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f3408b-029f-4f55-a8ee-d0dea3c82197-kube-api-access-4tfnj" (OuterVolumeSpecName: "kube-api-access-4tfnj") pod "57f3408b-029f-4f55-a8ee-d0dea3c82197" (UID: "57f3408b-029f-4f55-a8ee-d0dea3c82197"). InnerVolumeSpecName "kube-api-access-4tfnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.728983 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-util" (OuterVolumeSpecName: "util") pod "57f3408b-029f-4f55-a8ee-d0dea3c82197" (UID: "57f3408b-029f-4f55-a8ee-d0dea3c82197"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.822686 4856 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.822987 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tfnj\" (UniqueName: \"kubernetes.io/projected/57f3408b-029f-4f55-a8ee-d0dea3c82197-kube-api-access-4tfnj\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:56 crc kubenswrapper[4856]: I1122 08:48:56.823005 4856 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57f3408b-029f-4f55-a8ee-d0dea3c82197-util\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.322461 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" event={"ID":"57f3408b-029f-4f55-a8ee-d0dea3c82197","Type":"ContainerDied","Data":"6248a4d5119e2e40379c4bb8523a1ec02f70dcfaa2fb35f18fcab6eea8d1f4b0"} Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.322524 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6248a4d5119e2e40379c4bb8523a1ec02f70dcfaa2fb35f18fcab6eea8d1f4b0" Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.322584 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g" Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.847190 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.946602 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data\") pod \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.946648 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data-custom\") pod \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.946683 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-combined-ca-bundle\") pod \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.946789 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqx2g\" (UniqueName: \"kubernetes.io/projected/a33c09f9-1cb0-4669-b848-c83ad7aa9399-kube-api-access-gqx2g\") pod \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\" (UID: \"a33c09f9-1cb0-4669-b848-c83ad7aa9399\") " Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.952398 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod 
"a33c09f9-1cb0-4669-b848-c83ad7aa9399" (UID: "a33c09f9-1cb0-4669-b848-c83ad7aa9399"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.952411 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a33c09f9-1cb0-4669-b848-c83ad7aa9399-kube-api-access-gqx2g" (OuterVolumeSpecName: "kube-api-access-gqx2g") pod "a33c09f9-1cb0-4669-b848-c83ad7aa9399" (UID: "a33c09f9-1cb0-4669-b848-c83ad7aa9399"). InnerVolumeSpecName "kube-api-access-gqx2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.974463 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a33c09f9-1cb0-4669-b848-c83ad7aa9399" (UID: "a33c09f9-1cb0-4669-b848-c83ad7aa9399"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:57 crc kubenswrapper[4856]: I1122 08:48:57.996424 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data" (OuterVolumeSpecName: "config-data") pod "a33c09f9-1cb0-4669-b848-c83ad7aa9399" (UID: "a33c09f9-1cb0-4669-b848-c83ad7aa9399"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.049903 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.049984 4856 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.050000 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a33c09f9-1cb0-4669-b848-c83ad7aa9399-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.050017 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqx2g\" (UniqueName: \"kubernetes.io/projected/a33c09f9-1cb0-4669-b848-c83ad7aa9399-kube-api-access-gqx2g\") on node \"crc\" DevicePath \"\"" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.332926 4856 generic.go:334] "Generic (PLEG): container finished" podID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" exitCode=0 Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.332967 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6874b545dc-hd8t9" event={"ID":"a33c09f9-1cb0-4669-b848-c83ad7aa9399","Type":"ContainerDied","Data":"6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a"} Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.332993 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6874b545dc-hd8t9" event={"ID":"a33c09f9-1cb0-4669-b848-c83ad7aa9399","Type":"ContainerDied","Data":"5b9f126d0252e08a0c2aa70a742e43a0886bb89c94f8fedaa10202265866ff48"} Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.333009 4856 
scope.go:117] "RemoveContainer" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.333541 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6874b545dc-hd8t9" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.356942 4856 scope.go:117] "RemoveContainer" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" Nov 22 08:48:58 crc kubenswrapper[4856]: E1122 08:48:58.357474 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a\": container with ID starting with 6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a not found: ID does not exist" containerID="6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.357539 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a"} err="failed to get container status \"6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a\": rpc error: code = NotFound desc = could not find container \"6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a\": container with ID starting with 6ec3d79148d3a476306ced24d5155023da9b8f00b00eb38b9c1d2cca9eb2410a not found: ID does not exist" Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.370954 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-6874b545dc-hd8t9"] Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.378997 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-6874b545dc-hd8t9"] Nov 22 08:48:58 crc kubenswrapper[4856]: I1122 08:48:58.722034 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" path="/var/lib/kubelet/pods/a33c09f9-1cb0-4669-b848-c83ad7aa9399/volumes" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.372248 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs"] Nov 22 08:49:09 crc kubenswrapper[4856]: E1122 08:49:09.373229 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerName="extract" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.373247 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerName="extract" Nov 22 08:49:09 crc kubenswrapper[4856]: E1122 08:49:09.373270 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerName="pull" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.373279 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerName="pull" Nov 22 08:49:09 crc kubenswrapper[4856]: E1122 08:49:09.373298 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" containerName="heat-engine" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.373307 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" containerName="heat-engine" Nov 22 08:49:09 crc kubenswrapper[4856]: E1122 08:49:09.373346 4856 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerName="util" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.373354 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerName="util" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.373596 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f3408b-029f-4f55-a8ee-d0dea3c82197" containerName="extract" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.373625 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a33c09f9-1cb0-4669-b848-c83ad7aa9399" containerName="heat-engine" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.374500 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.379496 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.379592 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.380225 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-82cpm" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.402365 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.447568 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.449157 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.451777 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.475910 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-sbkkx" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.476170 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.477386 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.482788 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efab8443-6c3b-47ee-9ba2-22a3e1f28892-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5\" (UID: \"efab8443-6c3b-47ee-9ba2-22a3e1f28892\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.482894 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbdqf\" (UniqueName: \"kubernetes.io/projected/6161a409-9230-4400-a777-a234bd4f9747-kube-api-access-cbdqf\") pod \"obo-prometheus-operator-668cf9dfbb-65pzs\" (UID: \"6161a409-9230-4400-a777-a234bd4f9747\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.482932 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/efab8443-6c3b-47ee-9ba2-22a3e1f28892-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5\" (UID: \"efab8443-6c3b-47ee-9ba2-22a3e1f28892\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.495317 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.516582 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.584844 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efab8443-6c3b-47ee-9ba2-22a3e1f28892-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5\" (UID: \"efab8443-6c3b-47ee-9ba2-22a3e1f28892\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.585234 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbdqf\" (UniqueName: \"kubernetes.io/projected/6161a409-9230-4400-a777-a234bd4f9747-kube-api-access-cbdqf\") pod \"obo-prometheus-operator-668cf9dfbb-65pzs\" (UID: \"6161a409-9230-4400-a777-a234bd4f9747\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.585268 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71c7fcd5-848f-4503-b1be-09ae67600084-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r\" (UID: \"71c7fcd5-848f-4503-b1be-09ae67600084\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.585304 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/efab8443-6c3b-47ee-9ba2-22a3e1f28892-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5\" (UID: \"efab8443-6c3b-47ee-9ba2-22a3e1f28892\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.585392 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71c7fcd5-848f-4503-b1be-09ae67600084-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r\" (UID: \"71c7fcd5-848f-4503-b1be-09ae67600084\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.609555 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/efab8443-6c3b-47ee-9ba2-22a3e1f28892-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5\" (UID: \"efab8443-6c3b-47ee-9ba2-22a3e1f28892\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.617015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/efab8443-6c3b-47ee-9ba2-22a3e1f28892-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5\" (UID: \"efab8443-6c3b-47ee-9ba2-22a3e1f28892\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.627052 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-2zz67"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.627090 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbdqf\" (UniqueName: \"kubernetes.io/projected/6161a409-9230-4400-a777-a234bd4f9747-kube-api-access-cbdqf\") pod \"obo-prometheus-operator-668cf9dfbb-65pzs\" (UID: \"6161a409-9230-4400-a777-a234bd4f9747\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.628407 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.640910 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-7cgp5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.641277 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.642037 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-2zz67"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.693105 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71c7fcd5-848f-4503-b1be-09ae67600084-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r\" (UID: \"71c7fcd5-848f-4503-b1be-09ae67600084\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.693183 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzk9\" (UniqueName: \"kubernetes.io/projected/6a78f586-cd46-4f0e-b24b-62b93885a986-kube-api-access-tpzk9\") pod \"observability-operator-d8bb48f5d-2zz67\" (UID: \"6a78f586-cd46-4f0e-b24b-62b93885a986\") " pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.693216 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71c7fcd5-848f-4503-b1be-09ae67600084-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r\" (UID: \"71c7fcd5-848f-4503-b1be-09ae67600084\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.693234 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a78f586-cd46-4f0e-b24b-62b93885a986-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-2zz67\" (UID: \"6a78f586-cd46-4f0e-b24b-62b93885a986\") " pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.701302 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.706309 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/71c7fcd5-848f-4503-b1be-09ae67600084-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r\" (UID: \"71c7fcd5-848f-4503-b1be-09ae67600084\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.708723 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/71c7fcd5-848f-4503-b1be-09ae67600084-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r\" (UID: \"71c7fcd5-848f-4503-b1be-09ae67600084\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.773982 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.794484 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpzk9\" (UniqueName: \"kubernetes.io/projected/6a78f586-cd46-4f0e-b24b-62b93885a986-kube-api-access-tpzk9\") pod \"observability-operator-d8bb48f5d-2zz67\" (UID: \"6a78f586-cd46-4f0e-b24b-62b93885a986\") " pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.794601 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a78f586-cd46-4f0e-b24b-62b93885a986-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-2zz67\" (UID: \"6a78f586-cd46-4f0e-b24b-62b93885a986\") " pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.802617 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/6a78f586-cd46-4f0e-b24b-62b93885a986-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-2zz67\" (UID: \"6a78f586-cd46-4f0e-b24b-62b93885a986\") " pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.806002 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.814077 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-nfw7d"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.815796 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.820504 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-6s6d8" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.829963 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpzk9\" (UniqueName: \"kubernetes.io/projected/6a78f586-cd46-4f0e-b24b-62b93885a986-kube-api-access-tpzk9\") pod \"observability-operator-d8bb48f5d-2zz67\" (UID: \"6a78f586-cd46-4f0e-b24b-62b93885a986\") " pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.835378 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-nfw7d"] Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.894792 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.896749 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv25b\" (UniqueName: \"kubernetes.io/projected/5b1b8d7d-8d9e-4cfc-93ca-764793a0b848-kube-api-access-vv25b\") pod \"perses-operator-5446b9c989-nfw7d\" (UID: \"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848\") " pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:09 crc kubenswrapper[4856]: I1122 08:49:09.896873 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b1b8d7d-8d9e-4cfc-93ca-764793a0b848-openshift-service-ca\") pod \"perses-operator-5446b9c989-nfw7d\" (UID: \"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848\") " pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:09.998550 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv25b\" (UniqueName: \"kubernetes.io/projected/5b1b8d7d-8d9e-4cfc-93ca-764793a0b848-kube-api-access-vv25b\") pod \"perses-operator-5446b9c989-nfw7d\" (UID: \"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848\") " pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:09.999004 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b1b8d7d-8d9e-4cfc-93ca-764793a0b848-openshift-service-ca\") pod \"perses-operator-5446b9c989-nfw7d\" (UID: \"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848\") " pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.000114 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5b1b8d7d-8d9e-4cfc-93ca-764793a0b848-openshift-service-ca\") pod \"perses-operator-5446b9c989-nfw7d\" (UID: \"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848\") " pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.022533 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv25b\" (UniqueName: \"kubernetes.io/projected/5b1b8d7d-8d9e-4cfc-93ca-764793a0b848-kube-api-access-vv25b\") pod \"perses-operator-5446b9c989-nfw7d\" (UID: 
\"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848\") " pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.207880 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.425595 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r"] Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.466712 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" event={"ID":"71c7fcd5-848f-4503-b1be-09ae67600084","Type":"ContainerStarted","Data":"f7464ec0eaa9b30ed142b93b54975d08649b7cf8d5c2c644c3afe12d3838ac39"} Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.467823 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" event={"ID":"6161a409-9230-4400-a777-a234bd4f9747","Type":"ContainerStarted","Data":"bf6e67e3ddb2530b00fe71ab982cb715e1300495064e70499e631fb1742e777b"} Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.490572 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs"] Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.499809 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5"] Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.585627 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-2zz67"] Nov 22 08:49:10 crc kubenswrapper[4856]: W1122 08:49:10.603072 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a78f586_cd46_4f0e_b24b_62b93885a986.slice/crio-3d9045a6e3e0d872d7914cee0854e950612ff6f42654140a7c92574d3021cd75 WatchSource:0}: Error finding container 3d9045a6e3e0d872d7914cee0854e950612ff6f42654140a7c92574d3021cd75: Status 404 returned error can't find the container with id 3d9045a6e3e0d872d7914cee0854e950612ff6f42654140a7c92574d3021cd75 Nov 22 08:49:10 crc kubenswrapper[4856]: I1122 08:49:10.770368 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-nfw7d"] Nov 22 08:49:10 crc kubenswrapper[4856]: W1122 08:49:10.774707 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b1b8d7d_8d9e_4cfc_93ca_764793a0b848.slice/crio-eacb9f4c27921915029db40f96fad49657fcce272033ca9b927c470a38ab58c3 WatchSource:0}: Error finding container eacb9f4c27921915029db40f96fad49657fcce272033ca9b927c470a38ab58c3: Status 404 returned error can't find the container with id eacb9f4c27921915029db40f96fad49657fcce272033ca9b927c470a38ab58c3 Nov 22 08:49:11 crc kubenswrapper[4856]: I1122 08:49:11.506760 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" event={"ID":"efab8443-6c3b-47ee-9ba2-22a3e1f28892","Type":"ContainerStarted","Data":"49aad045dfab1045f931dba4c4db34b788d8f04534d3d32260be475d8322a3ef"} Nov 22 08:49:11 crc kubenswrapper[4856]: I1122 08:49:11.509448 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/perses-operator-5446b9c989-nfw7d" event={"ID":"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848","Type":"ContainerStarted","Data":"eacb9f4c27921915029db40f96fad49657fcce272033ca9b927c470a38ab58c3"} Nov 22 08:49:11 crc kubenswrapper[4856]: I1122 08:49:11.528930 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" event={"ID":"6a78f586-cd46-4f0e-b24b-62b93885a986","Type":"ContainerStarted","Data":"3d9045a6e3e0d872d7914cee0854e950612ff6f42654140a7c92574d3021cd75"} Nov 22 08:49:17 crc kubenswrapper[4856]: I1122 08:49:17.040679 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c90d-account-create-4hzwx"] Nov 22 08:49:17 crc kubenswrapper[4856]: I1122 08:49:17.068302 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-tgjkx"] Nov 22 08:49:17 crc kubenswrapper[4856]: I1122 08:49:17.081660 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-tgjkx"] Nov 22 08:49:17 crc kubenswrapper[4856]: I1122 08:49:17.093607 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c90d-account-create-4hzwx"] Nov 22 08:49:18 crc kubenswrapper[4856]: I1122 08:49:18.735499 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29c8fd83-ca48-40c4-b640-dded6ec91e69" path="/var/lib/kubelet/pods/29c8fd83-ca48-40c4-b640-dded6ec91e69/volumes" Nov 22 08:49:18 crc kubenswrapper[4856]: I1122 08:49:18.736897 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b9bb96-b75a-450e-afba-b290ec554b4b" path="/var/lib/kubelet/pods/a9b9bb96-b75a-450e-afba-b290ec554b4b/volumes" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.636925 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" event={"ID":"71c7fcd5-848f-4503-b1be-09ae67600084","Type":"ContainerStarted","Data":"c3f9ebd9692495f2c32e7150be937ed535dc1d691d8db92b4d2c3e164d933434"} Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.638444 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-nfw7d" event={"ID":"5b1b8d7d-8d9e-4cfc-93ca-764793a0b848","Type":"ContainerStarted","Data":"2fd6deaccc87af2857aeab732770d2ca2fbdad07fa224839d0e2fd838a6a17a4"} Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.638554 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.640497 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" event={"ID":"6a78f586-cd46-4f0e-b24b-62b93885a986","Type":"ContainerStarted","Data":"658fbe0beaa10918744b3a1a171c9f78eaa46150605806cb343c6a469f561b34"} Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.640701 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.642502 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" event={"ID":"6161a409-9230-4400-a777-a234bd4f9747","Type":"ContainerStarted","Data":"b57508b625f825da4fcfec2881a34f0dac56ddf0db1e05b83676bfd2bf7cbe98"} Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.644244 4856 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" event={"ID":"efab8443-6c3b-47ee-9ba2-22a3e1f28892","Type":"ContainerStarted","Data":"42db2100108e010e59f5dc831b65861a3b70f0c2705a221b391ad01975396d4e"} Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.662077 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r" podStartSLOduration=2.434315473 podStartE2EDuration="10.662045492s" podCreationTimestamp="2025-11-22 08:49:09 +0000 UTC" firstStartedPulling="2025-11-22 08:49:10.430452725 +0000 UTC m=+6392.843845983" lastFinishedPulling="2025-11-22 08:49:18.658182744 +0000 UTC m=+6401.071576002" observedRunningTime="2025-11-22 08:49:19.657101238 +0000 UTC m=+6402.070494516" watchObservedRunningTime="2025-11-22 08:49:19.662045492 +0000 UTC m=+6402.075438750" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.672552 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.707696 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-2zz67" podStartSLOduration=2.558231514 podStartE2EDuration="10.707605271s" podCreationTimestamp="2025-11-22 08:49:09 +0000 UTC" firstStartedPulling="2025-11-22 08:49:10.606913191 +0000 UTC m=+6393.020306449" lastFinishedPulling="2025-11-22 08:49:18.756286948 +0000 UTC m=+6401.169680206" observedRunningTime="2025-11-22 08:49:19.696195542 +0000 UTC m=+6402.109588790" watchObservedRunningTime="2025-11-22 08:49:19.707605271 +0000 UTC m=+6402.120998539" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.726742 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-nfw7d" podStartSLOduration=2.844963721 podStartE2EDuration="10.726720775s" podCreationTimestamp="2025-11-22 08:49:09 +0000 UTC" firstStartedPulling="2025-11-22 08:49:10.777078797 +0000 UTC m=+6393.190472055" lastFinishedPulling="2025-11-22 08:49:18.658835851 +0000 UTC m=+6401.072229109" observedRunningTime="2025-11-22 08:49:19.721900006 +0000 UTC m=+6402.135293274" watchObservedRunningTime="2025-11-22 08:49:19.726720775 +0000 UTC m=+6402.140114033" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.802292 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-65pzs" podStartSLOduration=2.5779897849999998 podStartE2EDuration="10.802265211s" podCreationTimestamp="2025-11-22 08:49:09 +0000 UTC" firstStartedPulling="2025-11-22 08:49:10.439715254 +0000 UTC m=+6392.853108512" lastFinishedPulling="2025-11-22 08:49:18.66399068 +0000 UTC m=+6401.077383938" observedRunningTime="2025-11-22 08:49:19.771580094 +0000 UTC m=+6402.184973352" watchObservedRunningTime="2025-11-22 08:49:19.802265211 +0000 UTC m=+6402.215658469" Nov 22 08:49:19 crc kubenswrapper[4856]: I1122 08:49:19.936358 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5" podStartSLOduration=2.738702007 podStartE2EDuration="10.936336245s" podCreationTimestamp="2025-11-22 08:49:09 +0000 UTC" firstStartedPulling="2025-11-22 08:49:10.466134086 +0000 UTC m=+6392.879527344" lastFinishedPulling="2025-11-22 08:49:18.663768314 +0000 UTC 
m=+6401.077161582" observedRunningTime="2025-11-22 08:49:19.861365734 +0000 UTC m=+6402.274758992" watchObservedRunningTime="2025-11-22 08:49:19.936336245 +0000 UTC m=+6402.349729503" Nov 22 08:49:24 crc kubenswrapper[4856]: I1122 08:49:24.438447 4856 scope.go:117] "RemoveContainer" containerID="f1c8f505db70f4824efdd09dfdc3295943db6c043dd547788670aafe338e3a3e" Nov 22 08:49:24 crc kubenswrapper[4856]: I1122 08:49:24.709057 4856 scope.go:117] "RemoveContainer" containerID="f0d618af93e239ba26dfe5c8c86a88e8fe73ea7f034082350ee2dd9bc4c81710" Nov 22 08:49:30 crc kubenswrapper[4856]: I1122 08:49:30.212301 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-nfw7d" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.094602 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.096496 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="bc894d74-307d-4700-aa80-9a72d7abe560" containerName="openstackclient" containerID="cri-o://2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67" gracePeriod=2 Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.120083 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.175464 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 08:49:33 crc kubenswrapper[4856]: E1122 08:49:33.176228 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc894d74-307d-4700-aa80-9a72d7abe560" containerName="openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.176351 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc894d74-307d-4700-aa80-9a72d7abe560" containerName="openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.176706 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc894d74-307d-4700-aa80-9a72d7abe560" containerName="openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.177671 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.193806 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bc894d74-307d-4700-aa80-9a72d7abe560" podUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.218835 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.270460 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config-secret\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.270982 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.271250 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td775\" (UniqueName: \"kubernetes.io/projected/88ec791b-7c80-478e-a95e-f8c1f93f478b-kube-api-access-td775\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.271437 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.385830 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config-secret\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.385901 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.385927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td775\" (UniqueName: \"kubernetes.io/projected/88ec791b-7c80-478e-a95e-f8c1f93f478b-kube-api-access-td775\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.385966 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 
08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.396250 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.396527 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-combined-ca-bundle\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.398521 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.398739 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.412710 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-l9hnk" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.412993 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config-secret\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.417585 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.446656 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td775\" (UniqueName: \"kubernetes.io/projected/88ec791b-7c80-478e-a95e-f8c1f93f478b-kube-api-access-td775\") pod \"openstackclient\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.503600 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9pf8\" (UniqueName: \"kubernetes.io/projected/0a74f189-af8a-4787-99d5-ec500950ccc8-kube-api-access-k9pf8\") pod \"kube-state-metrics-0\" (UID: \"0a74f189-af8a-4787-99d5-ec500950ccc8\") " pod="openstack/kube-state-metrics-0" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.529472 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.614866 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9pf8\" (UniqueName: \"kubernetes.io/projected/0a74f189-af8a-4787-99d5-ec500950ccc8-kube-api-access-k9pf8\") pod \"kube-state-metrics-0\" (UID: \"0a74f189-af8a-4787-99d5-ec500950ccc8\") " pod="openstack/kube-state-metrics-0" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.663418 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9pf8\" (UniqueName: \"kubernetes.io/projected/0a74f189-af8a-4787-99d5-ec500950ccc8-kube-api-access-k9pf8\") pod \"kube-state-metrics-0\" (UID: \"0a74f189-af8a-4787-99d5-ec500950ccc8\") " pod="openstack/kube-state-metrics-0" Nov 22 08:49:33 crc kubenswrapper[4856]: I1122 08:49:33.828789 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.271092 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.273791 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.296752 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.297109 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-p7qwb" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.297120 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.297317 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.297409 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.309912 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.348337 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0982dad5-4a0f-43a7-a561-a90a5c6a2070-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.348392 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.348443 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0982dad5-4a0f-43a7-a561-a90a5c6a2070-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.348528 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.348596 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn5jr\" (UniqueName: \"kubernetes.io/projected/0982dad5-4a0f-43a7-a561-a90a5c6a2070-kube-api-access-gn5jr\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.348658 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0982dad5-4a0f-43a7-a561-a90a5c6a2070-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.348681 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.424800 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.454149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0982dad5-4a0f-43a7-a561-a90a5c6a2070-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.454208 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.454248 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0982dad5-4a0f-43a7-a561-a90a5c6a2070-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.454298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: 
I1122 08:49:34.454360 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn5jr\" (UniqueName: \"kubernetes.io/projected/0982dad5-4a0f-43a7-a561-a90a5c6a2070-kube-api-access-gn5jr\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.454423 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0982dad5-4a0f-43a7-a561-a90a5c6a2070-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.454447 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.455017 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/0982dad5-4a0f-43a7-a561-a90a5c6a2070-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.466288 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0982dad5-4a0f-43a7-a561-a90a5c6a2070-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.471132 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.484173 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.486372 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0982dad5-4a0f-43a7-a561-a90a5c6a2070-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.488812 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0982dad5-4a0f-43a7-a561-a90a5c6a2070-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.496567 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gn5jr\" (UniqueName: \"kubernetes.io/projected/0982dad5-4a0f-43a7-a561-a90a5c6a2070-kube-api-access-gn5jr\") pod \"alertmanager-metric-storage-0\" (UID: \"0982dad5-4a0f-43a7-a561-a90a5c6a2070\") " pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.643550 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: W1122 08:49:34.799207 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a74f189_af8a_4787_99d5_ec500950ccc8.slice/crio-57ae82c11425083f71976cd6740334b050873b03d59cc88a1feb23afa31e48e6 WatchSource:0}: Error finding container 57ae82c11425083f71976cd6740334b050873b03d59cc88a1feb23afa31e48e6: Status 404 returned error can't find the container with id 57ae82c11425083f71976cd6740334b050873b03d59cc88a1feb23afa31e48e6 Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.805366 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.805580 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.821395 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.835977 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.836207 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.836340 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.836499 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.836679 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.836777 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-d62q6" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.844293 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.934356 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"88ec791b-7c80-478e-a95e-f8c1f93f478b","Type":"ContainerStarted","Data":"5b768fda51202844dd51df36fa2a0b7b14148a3bdae4bd6c761d8b72902fd192"} Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.985965 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-config\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.986021 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49b252a7-5676-4326-8238-28d33e7d097a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.986065 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.986240 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.986426 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.986603 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/49b252a7-5676-4326-8238-28d33e7d097a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.986711 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:34 crc kubenswrapper[4856]: I1122 08:49:34.986756 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtkzs\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-kube-api-access-rtkzs\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088496 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088638 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088741 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/49b252a7-5676-4326-8238-28d33e7d097a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088800 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088828 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtkzs\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-kube-api-access-rtkzs\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088870 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-config\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088896 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49b252a7-5676-4326-8238-28d33e7d097a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.088944 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.100568 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/49b252a7-5676-4326-8238-28d33e7d097a-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.118870 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.118918 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd9ece169a90f74cd4430627dee0e4356066eec80a3078f069b25bc3a35bddf6/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.125634 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.130019 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.131319 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-config\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.152043 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49b252a7-5676-4326-8238-28d33e7d097a-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.156888 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.163354 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtkzs\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-kube-api-access-rtkzs\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.286148 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.488120 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.492955 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 22 08:49:35 crc kubenswrapper[4856]: W1122 08:49:35.494675 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0982dad5_4a0f_43a7_a561_a90a5c6a2070.slice/crio-0f0fd02c77a78ef97cdeb4b52730b4d48831d05d5e334d57688197d32d3cd956 WatchSource:0}: Error finding container 0f0fd02c77a78ef97cdeb4b52730b4d48831d05d5e334d57688197d32d3cd956: Status 404 returned error can't find the container with id 0f0fd02c77a78ef97cdeb4b52730b4d48831d05d5e334d57688197d32d3cd956 Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.614637 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.702644 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config-secret\") pod \"bc894d74-307d-4700-aa80-9a72d7abe560\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.702777 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-combined-ca-bundle\") pod \"bc894d74-307d-4700-aa80-9a72d7abe560\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.702809 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpdch\" (UniqueName: \"kubernetes.io/projected/bc894d74-307d-4700-aa80-9a72d7abe560-kube-api-access-jpdch\") pod \"bc894d74-307d-4700-aa80-9a72d7abe560\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.702834 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config\") pod \"bc894d74-307d-4700-aa80-9a72d7abe560\" (UID: \"bc894d74-307d-4700-aa80-9a72d7abe560\") " Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.709919 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc894d74-307d-4700-aa80-9a72d7abe560-kube-api-access-jpdch" (OuterVolumeSpecName: "kube-api-access-jpdch") pod "bc894d74-307d-4700-aa80-9a72d7abe560" (UID: "bc894d74-307d-4700-aa80-9a72d7abe560"). InnerVolumeSpecName "kube-api-access-jpdch". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.764002 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "bc894d74-307d-4700-aa80-9a72d7abe560" (UID: "bc894d74-307d-4700-aa80-9a72d7abe560"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.771092 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "bc894d74-307d-4700-aa80-9a72d7abe560" (UID: "bc894d74-307d-4700-aa80-9a72d7abe560"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.795123 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc894d74-307d-4700-aa80-9a72d7abe560" (UID: "bc894d74-307d-4700-aa80-9a72d7abe560"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.805770 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpdch\" (UniqueName: \"kubernetes.io/projected/bc894d74-307d-4700-aa80-9a72d7abe560-kube-api-access-jpdch\") on node \"crc\" DevicePath \"\"" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.805800 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.805813 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.805821 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc894d74-307d-4700-aa80-9a72d7abe560-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.946843 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0982dad5-4a0f-43a7-a561-a90a5c6a2070","Type":"ContainerStarted","Data":"0f0fd02c77a78ef97cdeb4b52730b4d48831d05d5e334d57688197d32d3cd956"} Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.948778 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a74f189-af8a-4787-99d5-ec500950ccc8","Type":"ContainerStarted","Data":"57ae82c11425083f71976cd6740334b050873b03d59cc88a1feb23afa31e48e6"} Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.950935 4856 generic.go:334] "Generic (PLEG): container finished" podID="bc894d74-307d-4700-aa80-9a72d7abe560" containerID="2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67" exitCode=137 Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.951009 4856 scope.go:117] "RemoveContainer" containerID="2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.951021 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.952782 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"88ec791b-7c80-478e-a95e-f8c1f93f478b","Type":"ContainerStarted","Data":"c52847850096cd4307bd30bfd25b0d25b80fabe0d00b9c2da2c02e6fe7607a9d"} Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.976154 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.976134961 podStartE2EDuration="2.976134961s" podCreationTimestamp="2025-11-22 08:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:49:35.972668967 +0000 UTC m=+6418.386062215" watchObservedRunningTime="2025-11-22 08:49:35.976134961 +0000 UTC m=+6418.389528219" Nov 22 08:49:35 crc kubenswrapper[4856]: I1122 08:49:35.976883 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="bc894d74-307d-4700-aa80-9a72d7abe560" podUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" Nov 22 08:49:36 crc kubenswrapper[4856]: I1122 08:49:36.004002 4856 scope.go:117] "RemoveContainer" containerID="2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67" Nov 22 08:49:36 crc kubenswrapper[4856]: E1122 08:49:36.004581 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67\": container with ID starting with 2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67 not found: ID does not exist" containerID="2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67" Nov 22 08:49:36 crc kubenswrapper[4856]: I1122 08:49:36.004646 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67"} err="failed to get container status \"2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67\": rpc error: code = NotFound desc = could not find container \"2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67\": container with ID starting with 2a4f14e48ec7fecdcb60a87a4e347194e7df540fa59d368042b79737ad8c8f67 not found: ID does not exist" Nov 22 08:49:36 crc kubenswrapper[4856]: I1122 08:49:36.508580 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:49:36 crc kubenswrapper[4856]: I1122 08:49:36.720850 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc894d74-307d-4700-aa80-9a72d7abe560" path="/var/lib/kubelet/pods/bc894d74-307d-4700-aa80-9a72d7abe560/volumes" Nov 22 08:49:36 crc kubenswrapper[4856]: I1122 08:49:36.984576 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerStarted","Data":"1e26b66308915f4943eddd0810c1928356d86b5bd6b592ee4c215f6a2bd4cd58"} Nov 22 08:49:37 crc kubenswrapper[4856]: I1122 08:49:37.001011 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a74f189-af8a-4787-99d5-ec500950ccc8","Type":"ContainerStarted","Data":"a28aeefba809388cc4215a01b24777b3ab43655ded8ee0e351291026471d517f"} Nov 22 08:49:37 crc kubenswrapper[4856]: I1122 08:49:37.034474 4856 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.879411084 podStartE2EDuration="4.034446457s" podCreationTimestamp="2025-11-22 08:49:33 +0000 UTC" firstStartedPulling="2025-11-22 08:49:34.860249084 +0000 UTC m=+6417.273642342" lastFinishedPulling="2025-11-22 08:49:36.015284457 +0000 UTC m=+6418.428677715" observedRunningTime="2025-11-22 08:49:37.025655569 +0000 UTC m=+6419.439048847" watchObservedRunningTime="2025-11-22 08:49:37.034446457 +0000 UTC m=+6419.447839715" Nov 22 08:49:38 crc kubenswrapper[4856]: I1122 08:49:38.014556 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 08:49:43 crc kubenswrapper[4856]: I1122 08:49:43.066374 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0982dad5-4a0f-43a7-a561-a90a5c6a2070","Type":"ContainerStarted","Data":"8ec56954821308604459708b913805d0e337ea5f1a8906b35634745d751709eb"} Nov 22 08:49:43 crc kubenswrapper[4856]: I1122 08:49:43.069946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerStarted","Data":"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb"} Nov 22 08:49:43 crc kubenswrapper[4856]: I1122 08:49:43.833284 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 08:49:44 crc kubenswrapper[4856]: I1122 08:49:44.050121 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-x96jr"] Nov 22 08:49:44 crc kubenswrapper[4856]: I1122 08:49:44.059682 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-x96jr"] Nov 22 08:49:44 crc kubenswrapper[4856]: I1122 08:49:44.720741 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7f7ea0-76af-4d66-955a-3ad2b1f034e5" path="/var/lib/kubelet/pods/cc7f7ea0-76af-4d66-955a-3ad2b1f034e5/volumes" Nov 22 08:49:49 crc kubenswrapper[4856]: I1122 08:49:49.125895 4856 generic.go:334] "Generic (PLEG): container finished" podID="0982dad5-4a0f-43a7-a561-a90a5c6a2070" containerID="8ec56954821308604459708b913805d0e337ea5f1a8906b35634745d751709eb" exitCode=0 Nov 22 08:49:49 crc kubenswrapper[4856]: I1122 08:49:49.125993 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0982dad5-4a0f-43a7-a561-a90a5c6a2070","Type":"ContainerDied","Data":"8ec56954821308604459708b913805d0e337ea5f1a8906b35634745d751709eb"} Nov 22 08:49:50 crc kubenswrapper[4856]: I1122 08:49:50.140391 4856 generic.go:334] "Generic (PLEG): container finished" podID="49b252a7-5676-4326-8238-28d33e7d097a" containerID="cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb" exitCode=0 Nov 22 08:49:50 crc kubenswrapper[4856]: I1122 08:49:50.140445 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerDied","Data":"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb"} Nov 22 08:49:52 crc kubenswrapper[4856]: I1122 08:49:52.168103 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0982dad5-4a0f-43a7-a561-a90a5c6a2070","Type":"ContainerStarted","Data":"9a88ca0242c1865b8f78349edf52f4fe98104ac94be6d90919212489c8e4fee1"} Nov 22 08:49:57 
crc kubenswrapper[4856]: I1122 08:49:57.216382 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"0982dad5-4a0f-43a7-a561-a90a5c6a2070","Type":"ContainerStarted","Data":"c3ecf4a641aa1aa8535f0b0a9ed69091fc1fc541eaf3a2a52c5c2cafed9911f2"} Nov 22 08:49:58 crc kubenswrapper[4856]: I1122 08:49:58.229385 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerStarted","Data":"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2"} Nov 22 08:49:58 crc kubenswrapper[4856]: I1122 08:49:58.229781 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:58 crc kubenswrapper[4856]: I1122 08:49:58.234067 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 22 08:49:58 crc kubenswrapper[4856]: I1122 08:49:58.256676 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=8.952457381 podStartE2EDuration="24.256636289s" podCreationTimestamp="2025-11-22 08:49:34 +0000 UTC" firstStartedPulling="2025-11-22 08:49:35.990073687 +0000 UTC m=+6418.403466945" lastFinishedPulling="2025-11-22 08:49:51.294252595 +0000 UTC m=+6433.707645853" observedRunningTime="2025-11-22 08:49:58.248445618 +0000 UTC m=+6440.661838876" watchObservedRunningTime="2025-11-22 08:49:58.256636289 +0000 UTC m=+6440.670029547" Nov 22 08:49:59 crc kubenswrapper[4856]: I1122 08:49:59.754881 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:49:59 crc kubenswrapper[4856]: I1122 08:49:59.755234 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:50:02 crc kubenswrapper[4856]: I1122 08:50:02.269351 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerStarted","Data":"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032"} Nov 22 08:50:04 crc kubenswrapper[4856]: I1122 08:50:04.296477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerStarted","Data":"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc"} Nov 22 08:50:04 crc kubenswrapper[4856]: I1122 08:50:04.330978 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.872967223 podStartE2EDuration="31.330958235s" podCreationTimestamp="2025-11-22 08:49:33 +0000 UTC" firstStartedPulling="2025-11-22 08:49:36.51405887 +0000 UTC m=+6418.927452128" lastFinishedPulling="2025-11-22 08:50:03.972049882 +0000 UTC m=+6446.385443140" observedRunningTime="2025-11-22 08:50:04.321623983 +0000 UTC m=+6446.735017261" watchObservedRunningTime="2025-11-22 
08:50:04.330958235 +0000 UTC m=+6446.744351493" Nov 22 08:50:05 crc kubenswrapper[4856]: I1122 08:50:05.489863 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:05 crc kubenswrapper[4856]: I1122 08:50:05.490070 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:05 crc kubenswrapper[4856]: I1122 08:50:05.494130 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:06 crc kubenswrapper[4856]: I1122 08:50:06.316078 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.381343 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.381822 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" containerName="openstackclient" containerID="cri-o://c52847850096cd4307bd30bfd25b0d25b80fabe0d00b9c2da2c02e6fe7607a9d" gracePeriod=2 Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.396506 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.425554 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 22 08:50:07 crc kubenswrapper[4856]: E1122 08:50:07.425970 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" containerName="openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.425984 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" containerName="openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.426200 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" containerName="openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.426915 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.442122 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.451188 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" podUID="1a757c5a-d91e-485c-bf37-0d90b5e87f89" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.566908 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a757c5a-d91e-485c-bf37-0d90b5e87f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.567020 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a757c5a-d91e-485c-bf37-0d90b5e87f89-openstack-config\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.567105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a757c5a-d91e-485c-bf37-0d90b5e87f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.567144 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snr9t\" (UniqueName: \"kubernetes.io/projected/1a757c5a-d91e-485c-bf37-0d90b5e87f89-kube-api-access-snr9t\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.600917 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.603478 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.605582 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.609961 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.624767 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.669412 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-run-httpd\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.669457 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.669794 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a757c5a-d91e-485c-bf37-0d90b5e87f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.669883 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a757c5a-d91e-485c-bf37-0d90b5e87f89-openstack-config\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.669975 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-config-data\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.670063 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a757c5a-d91e-485c-bf37-0d90b5e87f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.670093 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-scripts\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.670119 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snr9t\" (UniqueName: \"kubernetes.io/projected/1a757c5a-d91e-485c-bf37-0d90b5e87f89-kube-api-access-snr9t\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 
08:50:07.670140 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj25r\" (UniqueName: \"kubernetes.io/projected/870368ad-d281-4f1a-a37f-2aa672506c81-kube-api-access-rj25r\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.670163 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.670257 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-log-httpd\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.672239 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1a757c5a-d91e-485c-bf37-0d90b5e87f89-openstack-config\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.687335 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a757c5a-d91e-485c-bf37-0d90b5e87f89-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.687759 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1a757c5a-d91e-485c-bf37-0d90b5e87f89-openstack-config-secret\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.693397 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snr9t\" (UniqueName: \"kubernetes.io/projected/1a757c5a-d91e-485c-bf37-0d90b5e87f89-kube-api-access-snr9t\") pod \"openstackclient\" (UID: \"1a757c5a-d91e-485c-bf37-0d90b5e87f89\") " pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.746476 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.772182 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-config-data\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.772296 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-scripts\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.772318 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj25r\" (UniqueName: \"kubernetes.io/projected/870368ad-d281-4f1a-a37f-2aa672506c81-kube-api-access-rj25r\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.772338 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.772389 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-log-httpd\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.772468 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-run-httpd\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.772489 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.774661 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-run-httpd\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.775872 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-log-httpd\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.777146 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-scripts\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 
08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.777478 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-config-data\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.779086 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.779553 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.796534 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj25r\" (UniqueName: \"kubernetes.io/projected/870368ad-d281-4f1a-a37f-2aa672506c81-kube-api-access-rj25r\") pod \"ceilometer-0\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " pod="openstack/ceilometer-0" Nov 22 08:50:07 crc kubenswrapper[4856]: I1122 08:50:07.924493 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:50:08 crc kubenswrapper[4856]: I1122 08:50:08.349403 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 22 08:50:08 crc kubenswrapper[4856]: W1122 08:50:08.350189 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a757c5a_d91e_485c_bf37_0d90b5e87f89.slice/crio-d9bd4aacef11b6cd28b14280978e95da1ecb2e18550b9befe78dba7ca13a97f8 WatchSource:0}: Error finding container d9bd4aacef11b6cd28b14280978e95da1ecb2e18550b9befe78dba7ca13a97f8: Status 404 returned error can't find the container with id d9bd4aacef11b6cd28b14280978e95da1ecb2e18550b9befe78dba7ca13a97f8 Nov 22 08:50:08 crc kubenswrapper[4856]: W1122 08:50:08.420664 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod870368ad_d281_4f1a_a37f_2aa672506c81.slice/crio-7802f598813d7b36c59a93ace305ec22f87dd859f78e145d23e079d0f83e9282 WatchSource:0}: Error finding container 7802f598813d7b36c59a93ace305ec22f87dd859f78e145d23e079d0f83e9282: Status 404 returned error can't find the container with id 7802f598813d7b36c59a93ace305ec22f87dd859f78e145d23e079d0f83e9282 Nov 22 08:50:08 crc kubenswrapper[4856]: I1122 08:50:08.422688 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:50:09 crc kubenswrapper[4856]: I1122 08:50:09.018484 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:50:09 crc kubenswrapper[4856]: I1122 08:50:09.345965 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a757c5a-d91e-485c-bf37-0d90b5e87f89","Type":"ContainerStarted","Data":"d9bd4aacef11b6cd28b14280978e95da1ecb2e18550b9befe78dba7ca13a97f8"} Nov 22 08:50:09 crc kubenswrapper[4856]: I1122 08:50:09.347529 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerStarted","Data":"7802f598813d7b36c59a93ace305ec22f87dd859f78e145d23e079d0f83e9282"} Nov 22 08:50:09 crc kubenswrapper[4856]: I1122 08:50:09.347709 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="prometheus" containerID="cri-o://7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2" gracePeriod=600 Nov 22 08:50:09 crc kubenswrapper[4856]: I1122 08:50:09.347787 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="thanos-sidecar" containerID="cri-o://2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc" gracePeriod=600 Nov 22 08:50:09 crc kubenswrapper[4856]: I1122 08:50:09.347779 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="config-reloader" containerID="cri-o://da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032" gracePeriod=600 Nov 22 08:50:10 crc kubenswrapper[4856]: I1122 08:50:10.362869 4856 generic.go:334] "Generic (PLEG): container finished" podID="88ec791b-7c80-478e-a95e-f8c1f93f478b" containerID="c52847850096cd4307bd30bfd25b0d25b80fabe0d00b9c2da2c02e6fe7607a9d" exitCode=137 Nov 22 08:50:10 crc kubenswrapper[4856]: I1122 08:50:10.366303 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1a757c5a-d91e-485c-bf37-0d90b5e87f89","Type":"ContainerStarted","Data":"7985aca5656e99a0d873cda588406c3c71dd1ae18a38ce2181f5cbb15322b43d"} Nov 22 08:50:10 crc kubenswrapper[4856]: I1122 08:50:10.491972 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.132:9090/-/ready\": dial tcp 10.217.1.132:9090: connect: connection refused" Nov 22 08:50:10 crc kubenswrapper[4856]: I1122 08:50:10.968004 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 22 08:50:10 crc kubenswrapper[4856]: I1122 08:50:10.975307 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" podUID="1a757c5a-d91e-485c-bf37-0d90b5e87f89" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.066118 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td775\" (UniqueName: \"kubernetes.io/projected/88ec791b-7c80-478e-a95e-f8c1f93f478b-kube-api-access-td775\") pod \"88ec791b-7c80-478e-a95e-f8c1f93f478b\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.066241 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config\") pod \"88ec791b-7c80-478e-a95e-f8c1f93f478b\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.066296 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config-secret\") pod \"88ec791b-7c80-478e-a95e-f8c1f93f478b\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.066354 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-combined-ca-bundle\") pod \"88ec791b-7c80-478e-a95e-f8c1f93f478b\" (UID: \"88ec791b-7c80-478e-a95e-f8c1f93f478b\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.079333 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ec791b-7c80-478e-a95e-f8c1f93f478b-kube-api-access-td775" (OuterVolumeSpecName: "kube-api-access-td775") pod "88ec791b-7c80-478e-a95e-f8c1f93f478b" (UID: "88ec791b-7c80-478e-a95e-f8c1f93f478b"). InnerVolumeSpecName "kube-api-access-td775". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.102191 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "88ec791b-7c80-478e-a95e-f8c1f93f478b" (UID: "88ec791b-7c80-478e-a95e-f8c1f93f478b"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.104755 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88ec791b-7c80-478e-a95e-f8c1f93f478b" (UID: "88ec791b-7c80-478e-a95e-f8c1f93f478b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.138132 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "88ec791b-7c80-478e-a95e-f8c1f93f478b" (UID: "88ec791b-7c80-478e-a95e-f8c1f93f478b"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.172347 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.172391 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.172408 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88ec791b-7c80-478e-a95e-f8c1f93f478b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.172419 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td775\" (UniqueName: \"kubernetes.io/projected/88ec791b-7c80-478e-a95e-f8c1f93f478b-kube-api-access-td775\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.358599 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.378564 4856 scope.go:117] "RemoveContainer" containerID="c52847850096cd4307bd30bfd25b0d25b80fabe0d00b9c2da2c02e6fe7607a9d" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.378741 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.392694 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" podUID="1a757c5a-d91e-485c-bf37-0d90b5e87f89" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.395405 4856 generic.go:334] "Generic (PLEG): container finished" podID="49b252a7-5676-4326-8238-28d33e7d097a" containerID="2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc" exitCode=0 Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.395439 4856 generic.go:334] "Generic (PLEG): container finished" podID="49b252a7-5676-4326-8238-28d33e7d097a" containerID="da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032" exitCode=0 Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.395449 4856 generic.go:334] "Generic (PLEG): container finished" podID="49b252a7-5676-4326-8238-28d33e7d097a" containerID="7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2" exitCode=0 Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.395583 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.395992 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerDied","Data":"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc"} Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.396036 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerDied","Data":"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032"} Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.396053 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerDied","Data":"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2"} Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.396066 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"49b252a7-5676-4326-8238-28d33e7d097a","Type":"ContainerDied","Data":"1e26b66308915f4943eddd0810c1928356d86b5bd6b592ee4c215f6a2bd4cd58"} Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.441212 4856 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" podUID="1a757c5a-d91e-485c-bf37-0d90b5e87f89" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.446527 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.446484157 podStartE2EDuration="4.446484157s" podCreationTimestamp="2025-11-22 08:50:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:50:11.427486414 +0000 UTC m=+6453.840879682" watchObservedRunningTime="2025-11-22 08:50:11.446484157 +0000 UTC m=+6453.859877415" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489302 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-web-config\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489389 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-config\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489751 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtkzs\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-kube-api-access-rtkzs\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: 
\"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489789 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49b252a7-5676-4326-8238-28d33e7d097a-config-out\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489863 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-thanos-prometheus-http-client-file\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489937 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-tls-assets\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.489981 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/49b252a7-5676-4326-8238-28d33e7d097a-prometheus-metric-storage-rulefiles-0\") pod \"49b252a7-5676-4326-8238-28d33e7d097a\" (UID: \"49b252a7-5676-4326-8238-28d33e7d097a\") " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.499210 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-config" (OuterVolumeSpecName: "config") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.502005 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.504673 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b252a7-5676-4326-8238-28d33e7d097a-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.509964 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b252a7-5676-4326-8238-28d33e7d097a-config-out" (OuterVolumeSpecName: "config-out") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.512430 4856 scope.go:117] "RemoveContainer" containerID="2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.519302 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.520952 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-kube-api-access-rtkzs" (OuterVolumeSpecName: "kube-api-access-rtkzs") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "kube-api-access-rtkzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.545529 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-web-config" (OuterVolumeSpecName: "web-config") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.574478 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "49b252a7-5676-4326-8238-28d33e7d097a" (UID: "49b252a7-5676-4326-8238-28d33e7d097a"). InnerVolumeSpecName "pvc-111f3d86-645a-479b-b29a-0573c913bea4". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.592477 4856 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-web-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.593044 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.593084 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") on node \"crc\" " Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.593101 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtkzs\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-kube-api-access-rtkzs\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.593115 4856 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/49b252a7-5676-4326-8238-28d33e7d097a-config-out\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.593128 4856 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/49b252a7-5676-4326-8238-28d33e7d097a-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.593140 4856 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/49b252a7-5676-4326-8238-28d33e7d097a-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.593152 4856 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/49b252a7-5676-4326-8238-28d33e7d097a-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.628006 4856 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.628496 4856 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-111f3d86-645a-479b-b29a-0573c913bea4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4") on node "crc" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.687590 4856 scope.go:117] "RemoveContainer" containerID="da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.695073 4856 reconciler_common.go:293] "Volume detached for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.710866 4856 scope.go:117] "RemoveContainer" containerID="7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.736212 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.748766 4856 scope.go:117] "RemoveContainer" containerID="cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.749691 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.774910 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.775335 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="thanos-sidecar" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.775353 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="thanos-sidecar" Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.775400 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="config-reloader" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.775406 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="config-reloader" Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.775416 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="init-config-reloader" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.775424 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="init-config-reloader" Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.775436 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="prometheus" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.775443 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="prometheus" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.775635 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="config-reloader" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.775648 4856 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="prometheus" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.775659 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b252a7-5676-4326-8238-28d33e7d097a" containerName="thanos-sidecar" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.779926 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.781166 4856 scope.go:117] "RemoveContainer" containerID="2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc" Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.781964 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc\": container with ID starting with 2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc not found: ID does not exist" containerID="2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782002 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc"} err="failed to get container status \"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc\": rpc error: code = NotFound desc = could not find container \"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc\": container with ID starting with 2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782027 4856 scope.go:117] "RemoveContainer" containerID="da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032" Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.782272 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032\": container with ID starting with da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032 not found: ID does not exist" containerID="da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782292 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032"} err="failed to get container status \"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032\": rpc error: code = NotFound desc = could not find container \"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032\": container with ID starting with da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032 not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782304 4856 scope.go:117] "RemoveContainer" containerID="7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2" Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.782490 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2\": container with ID starting with 7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2 not found: ID does not exist" 
containerID="7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782525 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2"} err="failed to get container status \"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2\": rpc error: code = NotFound desc = could not find container \"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2\": container with ID starting with 7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2 not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782538 4856 scope.go:117] "RemoveContainer" containerID="cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb" Nov 22 08:50:11 crc kubenswrapper[4856]: E1122 08:50:11.782728 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb\": container with ID starting with cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb not found: ID does not exist" containerID="cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782746 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb"} err="failed to get container status \"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb\": rpc error: code = NotFound desc = could not find container \"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb\": container with ID starting with cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782759 4856 scope.go:117] "RemoveContainer" containerID="2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782947 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc"} err="failed to get container status \"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc\": rpc error: code = NotFound desc = could not find container \"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc\": container with ID starting with 2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.782964 4856 scope.go:117] "RemoveContainer" containerID="da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783198 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783384 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032"} err="failed to get container status \"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032\": rpc error: code = NotFound desc = could not find container \"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032\": container with ID starting with 
da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032 not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783403 4856 scope.go:117] "RemoveContainer" containerID="7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783606 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2"} err="failed to get container status \"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2\": rpc error: code = NotFound desc = could not find container \"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2\": container with ID starting with 7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2 not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783624 4856 scope.go:117] "RemoveContainer" containerID="cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783810 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb"} err="failed to get container status \"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb\": rpc error: code = NotFound desc = could not find container \"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb\": container with ID starting with cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783826 4856 scope.go:117] "RemoveContainer" containerID="2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.783993 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc"} err="failed to get container status \"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc\": rpc error: code = NotFound desc = could not find container \"2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc\": container with ID starting with 2a96d20c9c17ce6c7c1af152c83dcfc0d957ce44f013e010352f6e3a83baf7bc not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.784006 4856 scope.go:117] "RemoveContainer" containerID="da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.784181 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032"} err="failed to get container status \"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032\": rpc error: code = NotFound desc = could not find container \"da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032\": container with ID starting with da6ae3503fd9d84eea669a509f75117bcdd9edc8e31de6a27fd856c1ca775032 not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.784194 4856 scope.go:117] "RemoveContainer" containerID="7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.784359 4856 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2"} err="failed to get container status \"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2\": rpc error: code = NotFound desc = could not find container \"7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2\": container with ID starting with 7e6f58e87c9ed088f904f3454e963f8696068b3a69d6f4e1a89a26d1e861c8d2 not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.784377 4856 scope.go:117] "RemoveContainer" containerID="cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.784607 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.784829 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.785317 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.785437 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.785867 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-d62q6" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.788121 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.788445 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb"} err="failed to get container status \"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb\": rpc error: code = NotFound desc = could not find container \"cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb\": container with ID starting with cc236af8a1521f72ea3bd26b0a2cf83d9a56e71576ba686b0d9bab2d5e3d6dcb not found: ID does not exist" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.792404 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.907934 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908072 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908167 4856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqj6g\" (UniqueName: \"kubernetes.io/projected/3690a9de-19a8-491f-bf84-3fff9a9d52b3-kube-api-access-dqj6g\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908206 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908278 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908404 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3690a9de-19a8-491f-bf84-3fff9a9d52b3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908570 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-config\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908672 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908781 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908864 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3690a9de-19a8-491f-bf84-3fff9a9d52b3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:11 crc kubenswrapper[4856]: I1122 08:50:11.908966 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/3690a9de-19a8-491f-bf84-3fff9a9d52b3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012057 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012190 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3690a9de-19a8-491f-bf84-3fff9a9d52b3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012232 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3690a9de-19a8-491f-bf84-3fff9a9d52b3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012279 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012319 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012369 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqj6g\" (UniqueName: \"kubernetes.io/projected/3690a9de-19a8-491f-bf84-3fff9a9d52b3-kube-api-access-dqj6g\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012434 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 
crc kubenswrapper[4856]: I1122 08:50:12.012795 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012825 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3690a9de-19a8-491f-bf84-3fff9a9d52b3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.012885 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-config\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.018066 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.018133 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3690a9de-19a8-491f-bf84-3fff9a9d52b3-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.018662 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.020083 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.020318 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.022722 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-config\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.023247 4856 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.023371 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bd9ece169a90f74cd4430627dee0e4356066eec80a3078f069b25bc3a35bddf6/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.026644 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3690a9de-19a8-491f-bf84-3fff9a9d52b3-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.029866 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3690a9de-19a8-491f-bf84-3fff9a9d52b3-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.033471 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqj6g\" (UniqueName: \"kubernetes.io/projected/3690a9de-19a8-491f-bf84-3fff9a9d52b3-kube-api-access-dqj6g\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.034035 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3690a9de-19a8-491f-bf84-3fff9a9d52b3-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.075416 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-111f3d86-645a-479b-b29a-0573c913bea4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-111f3d86-645a-479b-b29a-0573c913bea4\") pod \"prometheus-metric-storage-0\" (UID: \"3690a9de-19a8-491f-bf84-3fff9a9d52b3\") " pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.156870 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.635029 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.722284 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b252a7-5676-4326-8238-28d33e7d097a" path="/var/lib/kubelet/pods/49b252a7-5676-4326-8238-28d33e7d097a/volumes" Nov 22 08:50:12 crc kubenswrapper[4856]: I1122 08:50:12.723130 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ec791b-7c80-478e-a95e-f8c1f93f478b" path="/var/lib/kubelet/pods/88ec791b-7c80-478e-a95e-f8c1f93f478b/volumes" Nov 22 08:50:20 crc kubenswrapper[4856]: I1122 08:50:20.587242 4856 patch_prober.go:28] interesting pod/oauth-openshift-5f78599457-lsztc container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 08:50:20 crc kubenswrapper[4856]: I1122 08:50:20.588292 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-5f78599457-lsztc" podUID="85222058-81a4-4395-9292-f7b16d6e5669" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.54:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 08:50:23 crc kubenswrapper[4856]: I1122 08:50:23.738700 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-s8jpj" podUID="a8b51997-87ba-499c-903d-82c1b85c0968" containerName="registry-server" probeResult="failure" output=< Nov 22 08:50:23 crc kubenswrapper[4856]: timeout: health rpc did not complete within 1s Nov 22 08:50:23 crc kubenswrapper[4856]: > Nov 22 08:50:24 crc kubenswrapper[4856]: I1122 08:50:24.920200 4856 scope.go:117] "RemoveContainer" containerID="3d94b2e93e319a110512c4b292b238bbea11e95c312c8368ba51264713bcc977" Nov 22 08:50:28 crc kubenswrapper[4856]: I1122 08:50:28.428722 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-rdwqk" podUID="47dda6c4-0264-433f-9edd-4599ee978799" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 22 08:50:29 crc kubenswrapper[4856]: I1122 08:50:29.754823 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:50:29 crc kubenswrapper[4856]: I1122 08:50:29.754926 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:50:34 crc kubenswrapper[4856]: I1122 08:50:34.647240 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3690a9de-19a8-491f-bf84-3fff9a9d52b3","Type":"ContainerStarted","Data":"c023e554e74cad1ddca9b0d205c657a08e9e9cdcd9969ce347f06b0a494957ab"} Nov 22 08:50:34 crc kubenswrapper[4856]: I1122 08:50:34.700109 4856 scope.go:117] "RemoveContainer" containerID="4b0d32af063078fcdf7da05237996b7339742fb51dea6b8b1b2d3b8d0da0028c" Nov 22 08:50:34 crc kubenswrapper[4856]: I1122 08:50:34.881679 4856 scope.go:117] "RemoveContainer" containerID="7a93ad5b8f171ce19f218b3b342691891e521fb536632eb9c7c9b0c49dadafd0" Nov 22 08:50:35 crc kubenswrapper[4856]: I1122 08:50:35.037691 4856 scope.go:117] "RemoveContainer" containerID="f1e9e90546389b0088fe327346b91714495ff6d955bfb1d17e8e444a855843c3" Nov 22 08:50:35 crc kubenswrapper[4856]: I1122 08:50:35.085860 4856 scope.go:117] "RemoveContainer" containerID="7eb599cca049e58a1214ab401ad7fbbfe537e484fe5ba55db233ef84c50389c9" Nov 22 08:50:35 crc kubenswrapper[4856]: I1122 08:50:35.126463 4856 scope.go:117] "RemoveContainer" containerID="ef1e41f2c9277a64d1a6de8db6459f671cdfcf85c5eb15d70795318e3d82fa0d" Nov 22 08:50:35 crc kubenswrapper[4856]: I1122 08:50:35.188050 4856 scope.go:117] "RemoveContainer" containerID="eac5280c8add0b4b9199a29b8732420e42b3fdde1da2ef89de524585e48304ce" Nov 22 08:50:36 crc kubenswrapper[4856]: E1122 08:50:36.244501 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-central:87d86758a49b8425a546c66207f21761" Nov 22 08:50:36 crc kubenswrapper[4856]: E1122 08:50:36.244598 4856 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-central:87d86758a49b8425a546c66207f21761" Nov 22 08:50:36 crc kubenswrapper[4856]: E1122 08:50:36.244773 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-central:87d86758a49b8425a546c66207f21761,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66fhb6hc6h578h599h65bh559h588h65h8bh9h65bhbh58fh699h94hf5h569h69h5ffh5d9h56h59fhdbh694h55chb8h56bhfh6ch657h58dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rj25r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(870368ad-d281-4f1a-a37f-2aa672506c81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 08:50:38 crc kubenswrapper[4856]: I1122 08:50:38.693556 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3690a9de-19a8-491f-bf84-3fff9a9d52b3","Type":"ContainerStarted","Data":"016c918ddb9d7ae1e23d0fb6cf28b21e2c9070f2b6a5c53281e0f31e75280744"} Nov 22 08:50:41 crc kubenswrapper[4856]: I1122 08:50:41.721701 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerStarted","Data":"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d"} Nov 22 08:50:42 crc kubenswrapper[4856]: I1122 08:50:42.738344 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerStarted","Data":"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527"} Nov 22 08:50:45 crc kubenswrapper[4856]: E1122 08:50:45.117086 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" Nov 22 08:50:45 crc kubenswrapper[4856]: I1122 08:50:45.769743 4856 generic.go:334] "Generic (PLEG): container finished" podID="3690a9de-19a8-491f-bf84-3fff9a9d52b3" containerID="016c918ddb9d7ae1e23d0fb6cf28b21e2c9070f2b6a5c53281e0f31e75280744" exitCode=0 Nov 22 08:50:45 crc kubenswrapper[4856]: I1122 08:50:45.769772 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3690a9de-19a8-491f-bf84-3fff9a9d52b3","Type":"ContainerDied","Data":"016c918ddb9d7ae1e23d0fb6cf28b21e2c9070f2b6a5c53281e0f31e75280744"} Nov 22 08:50:45 crc kubenswrapper[4856]: I1122 08:50:45.775160 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerStarted","Data":"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327"} Nov 22 08:50:45 crc kubenswrapper[4856]: I1122 08:50:45.776053 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 08:50:45 crc kubenswrapper[4856]: E1122 08:50:45.779335 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-central:87d86758a49b8425a546c66207f21761\\\"\"" pod="openstack/ceilometer-0" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" Nov 22 08:50:46 crc kubenswrapper[4856]: I1122 08:50:46.787634 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3690a9de-19a8-491f-bf84-3fff9a9d52b3","Type":"ContainerStarted","Data":"d145b880388b0d25ad88c0a9bd11c6b626b222691293c39f474b9c960bf3e94e"} Nov 22 08:50:48 crc kubenswrapper[4856]: I1122 08:50:48.811629 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerStarted","Data":"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3"} Nov 22 08:50:48 crc kubenswrapper[4856]: I1122 08:50:48.841924 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.14789224 podStartE2EDuration="41.841898071s" podCreationTimestamp="2025-11-22 08:50:07 +0000 UTC" firstStartedPulling="2025-11-22 08:50:08.42257192 +0000 UTC m=+6450.835965178" lastFinishedPulling="2025-11-22 08:50:48.116577751 +0000 UTC m=+6490.529971009" observedRunningTime="2025-11-22 08:50:48.834438831 +0000 UTC m=+6491.247832099" watchObservedRunningTime="2025-11-22 08:50:48.841898071 +0000 UTC m=+6491.255291329" Nov 22 08:50:50 crc kubenswrapper[4856]: I1122 08:50:50.834569 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3690a9de-19a8-491f-bf84-3fff9a9d52b3","Type":"ContainerStarted","Data":"34f17f07d1677f086fe62934d7327ebdb6b2157d4075f2617f3c4aad5af28415"} Nov 22 08:50:50 crc kubenswrapper[4856]: I1122 08:50:50.834929 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3690a9de-19a8-491f-bf84-3fff9a9d52b3","Type":"ContainerStarted","Data":"7ff298ea014e68cf8f98090f4efb3c8e0996bd14362bd36ad44e14ee668eb35b"} Nov 22 08:50:50 crc kubenswrapper[4856]: I1122 08:50:50.864874 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=39.864841428 podStartE2EDuration="39.864841428s" podCreationTimestamp="2025-11-22 08:50:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:50:50.860584483 +0000 UTC m=+6493.273977741" watchObservedRunningTime="2025-11-22 08:50:50.864841428 +0000 UTC m=+6493.278234686" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.042691 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-wz8t9"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.051064 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wz8t9"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.060415 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-c708-account-create-5pnzj"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.068319 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-c708-account-create-5pnzj"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.625547 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-nsv56"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.627643 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.693195 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-nsv56"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.749969 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-bd84-account-create-pzl2m"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.751499 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.753866 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.766699 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-bd84-account-create-pzl2m"] Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.774721 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6755\" (UniqueName: \"kubernetes.io/projected/86563ff1-f26e-4490-b9d3-ebe7456ee633-kube-api-access-n6755\") pod \"aodh-db-create-nsv56\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.774893 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86563ff1-f26e-4490-b9d3-ebe7456ee633-operator-scripts\") pod \"aodh-db-create-nsv56\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.876399 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzmdm\" (UniqueName: \"kubernetes.io/projected/2159a754-b822-4794-aee2-1e2d51ddca60-kube-api-access-dzmdm\") pod \"aodh-bd84-account-create-pzl2m\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.876472 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86563ff1-f26e-4490-b9d3-ebe7456ee633-operator-scripts\") pod \"aodh-db-create-nsv56\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.876548 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6755\" (UniqueName: \"kubernetes.io/projected/86563ff1-f26e-4490-b9d3-ebe7456ee633-kube-api-access-n6755\") pod \"aodh-db-create-nsv56\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.876751 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2159a754-b822-4794-aee2-1e2d51ddca60-operator-scripts\") pod \"aodh-bd84-account-create-pzl2m\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.877936 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86563ff1-f26e-4490-b9d3-ebe7456ee633-operator-scripts\") 
pod \"aodh-db-create-nsv56\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.908295 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6755\" (UniqueName: \"kubernetes.io/projected/86563ff1-f26e-4490-b9d3-ebe7456ee633-kube-api-access-n6755\") pod \"aodh-db-create-nsv56\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.949245 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.984750 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2159a754-b822-4794-aee2-1e2d51ddca60-operator-scripts\") pod \"aodh-bd84-account-create-pzl2m\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.984867 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzmdm\" (UniqueName: \"kubernetes.io/projected/2159a754-b822-4794-aee2-1e2d51ddca60-kube-api-access-dzmdm\") pod \"aodh-bd84-account-create-pzl2m\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:51 crc kubenswrapper[4856]: I1122 08:50:51.986444 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2159a754-b822-4794-aee2-1e2d51ddca60-operator-scripts\") pod \"aodh-bd84-account-create-pzl2m\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.006572 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzmdm\" (UniqueName: \"kubernetes.io/projected/2159a754-b822-4794-aee2-1e2d51ddca60-kube-api-access-dzmdm\") pod \"aodh-bd84-account-create-pzl2m\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.072880 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.163953 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.594577 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-nsv56"] Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.733119 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70b4c046-0b3b-42ea-b75d-ee15442bc981" path="/var/lib/kubelet/pods/70b4c046-0b3b-42ea-b75d-ee15442bc981/volumes" Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.748630 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec3298d-6113-4ba7-84d9-61a961e8128d" path="/var/lib/kubelet/pods/eec3298d-6113-4ba7-84d9-61a961e8128d/volumes" Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.749404 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-bd84-account-create-pzl2m"] Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.857402 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nsv56" event={"ID":"86563ff1-f26e-4490-b9d3-ebe7456ee633","Type":"ContainerStarted","Data":"5ef888470f8c6e1e4c932c7503701d0f06b8ce6ce3e1307449c45e4c2d3abbf2"} Nov 22 08:50:52 crc kubenswrapper[4856]: I1122 08:50:52.859218 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-bd84-account-create-pzl2m" event={"ID":"2159a754-b822-4794-aee2-1e2d51ddca60","Type":"ContainerStarted","Data":"b53d3f507e9f839ecdab80be343a850e15e27b9dee83140aed18137a0886bb02"} Nov 22 08:50:53 crc kubenswrapper[4856]: I1122 08:50:53.870403 4856 generic.go:334] "Generic (PLEG): container finished" podID="2159a754-b822-4794-aee2-1e2d51ddca60" containerID="46fbc23a6f52c136bea223cca57c9ce962ee3b91e3eb5220311bceb62393dc6f" exitCode=0 Nov 22 08:50:53 crc kubenswrapper[4856]: I1122 08:50:53.870566 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-bd84-account-create-pzl2m" event={"ID":"2159a754-b822-4794-aee2-1e2d51ddca60","Type":"ContainerDied","Data":"46fbc23a6f52c136bea223cca57c9ce962ee3b91e3eb5220311bceb62393dc6f"} Nov 22 08:50:53 crc kubenswrapper[4856]: I1122 08:50:53.872857 4856 generic.go:334] "Generic (PLEG): container finished" podID="86563ff1-f26e-4490-b9d3-ebe7456ee633" containerID="c69d77b37819d69865e14ee6ae032db05f663d0c2bb98b95b2cd154737f096f2" exitCode=0 Nov 22 08:50:53 crc kubenswrapper[4856]: I1122 08:50:53.872908 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nsv56" event={"ID":"86563ff1-f26e-4490-b9d3-ebe7456ee633","Type":"ContainerDied","Data":"c69d77b37819d69865e14ee6ae032db05f663d0c2bb98b95b2cd154737f096f2"} Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.319371 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.330358 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.370002 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6755\" (UniqueName: \"kubernetes.io/projected/86563ff1-f26e-4490-b9d3-ebe7456ee633-kube-api-access-n6755\") pod \"86563ff1-f26e-4490-b9d3-ebe7456ee633\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.370308 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86563ff1-f26e-4490-b9d3-ebe7456ee633-operator-scripts\") pod \"86563ff1-f26e-4490-b9d3-ebe7456ee633\" (UID: \"86563ff1-f26e-4490-b9d3-ebe7456ee633\") " Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.371105 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86563ff1-f26e-4490-b9d3-ebe7456ee633-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86563ff1-f26e-4490-b9d3-ebe7456ee633" (UID: "86563ff1-f26e-4490-b9d3-ebe7456ee633"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.376617 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86563ff1-f26e-4490-b9d3-ebe7456ee633-kube-api-access-n6755" (OuterVolumeSpecName: "kube-api-access-n6755") pod "86563ff1-f26e-4490-b9d3-ebe7456ee633" (UID: "86563ff1-f26e-4490-b9d3-ebe7456ee633"). InnerVolumeSpecName "kube-api-access-n6755". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.472502 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzmdm\" (UniqueName: \"kubernetes.io/projected/2159a754-b822-4794-aee2-1e2d51ddca60-kube-api-access-dzmdm\") pod \"2159a754-b822-4794-aee2-1e2d51ddca60\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.472597 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2159a754-b822-4794-aee2-1e2d51ddca60-operator-scripts\") pod \"2159a754-b822-4794-aee2-1e2d51ddca60\" (UID: \"2159a754-b822-4794-aee2-1e2d51ddca60\") " Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.473063 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6755\" (UniqueName: \"kubernetes.io/projected/86563ff1-f26e-4490-b9d3-ebe7456ee633-kube-api-access-n6755\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.473083 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86563ff1-f26e-4490-b9d3-ebe7456ee633-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.473364 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2159a754-b822-4794-aee2-1e2d51ddca60-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2159a754-b822-4794-aee2-1e2d51ddca60" (UID: "2159a754-b822-4794-aee2-1e2d51ddca60"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.475468 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2159a754-b822-4794-aee2-1e2d51ddca60-kube-api-access-dzmdm" (OuterVolumeSpecName: "kube-api-access-dzmdm") pod "2159a754-b822-4794-aee2-1e2d51ddca60" (UID: "2159a754-b822-4794-aee2-1e2d51ddca60"). InnerVolumeSpecName "kube-api-access-dzmdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.575327 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzmdm\" (UniqueName: \"kubernetes.io/projected/2159a754-b822-4794-aee2-1e2d51ddca60-kube-api-access-dzmdm\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.575362 4856 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2159a754-b822-4794-aee2-1e2d51ddca60-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.890699 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-bd84-account-create-pzl2m" event={"ID":"2159a754-b822-4794-aee2-1e2d51ddca60","Type":"ContainerDied","Data":"b53d3f507e9f839ecdab80be343a850e15e27b9dee83140aed18137a0886bb02"} Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.890737 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-bd84-account-create-pzl2m" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.890755 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53d3f507e9f839ecdab80be343a850e15e27b9dee83140aed18137a0886bb02" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.895757 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nsv56" event={"ID":"86563ff1-f26e-4490-b9d3-ebe7456ee633","Type":"ContainerDied","Data":"5ef888470f8c6e1e4c932c7503701d0f06b8ce6ce3e1307449c45e4c2d3abbf2"} Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.895804 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ef888470f8c6e1e4c932c7503701d0f06b8ce6ce3e1307449c45e4c2d3abbf2" Nov 22 08:50:55 crc kubenswrapper[4856]: I1122 08:50:55.895809 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-nsv56" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.085467 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-p699w"] Nov 22 08:50:57 crc kubenswrapper[4856]: E1122 08:50:57.086197 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2159a754-b822-4794-aee2-1e2d51ddca60" containerName="mariadb-account-create" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.086210 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2159a754-b822-4794-aee2-1e2d51ddca60" containerName="mariadb-account-create" Nov 22 08:50:57 crc kubenswrapper[4856]: E1122 08:50:57.086237 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86563ff1-f26e-4490-b9d3-ebe7456ee633" containerName="mariadb-database-create" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.086243 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="86563ff1-f26e-4490-b9d3-ebe7456ee633" containerName="mariadb-database-create" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.086406 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2159a754-b822-4794-aee2-1e2d51ddca60" containerName="mariadb-account-create" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.086441 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="86563ff1-f26e-4490-b9d3-ebe7456ee633" containerName="mariadb-database-create" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.087250 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.096237 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.096606 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.098106 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.099541 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-p699w"] Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.104274 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5c97j" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.161486 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.172352 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.216794 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-scripts\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.216984 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-config-data\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" 
Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.217585 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-combined-ca-bundle\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.218029 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2w6q\" (UniqueName: \"kubernetes.io/projected/b2212e72-48ef-465e-9839-473d346956cf-kube-api-access-j2w6q\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.320056 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-scripts\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.320137 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-config-data\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.320164 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-combined-ca-bundle\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.320299 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2w6q\" (UniqueName: \"kubernetes.io/projected/b2212e72-48ef-465e-9839-473d346956cf-kube-api-access-j2w6q\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.325712 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-scripts\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.326192 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-config-data\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.326296 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-combined-ca-bundle\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.341576 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2w6q\" (UniqueName: 
\"kubernetes.io/projected/b2212e72-48ef-465e-9839-473d346956cf-kube-api-access-j2w6q\") pod \"aodh-db-sync-p699w\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.407259 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-p699w" Nov 22 08:50:57 crc kubenswrapper[4856]: I1122 08:50:57.917697 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 22 08:50:58 crc kubenswrapper[4856]: I1122 08:50:58.006541 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-p699w"] Nov 22 08:50:58 crc kubenswrapper[4856]: I1122 08:50:58.924939 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p699w" event={"ID":"b2212e72-48ef-465e-9839-473d346956cf","Type":"ContainerStarted","Data":"842ca56084e2c0fd646c0dbf75ea50b3d743ae59bf5db7b3449062edb3928a2d"} Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.754306 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.754366 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.754423 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.755324 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e4db6dfa0f8e0b89e30204c184a440910ad4ebbbe2c1f37db91bf8c459e660c"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.755393 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://2e4db6dfa0f8e0b89e30204c184a440910ad4ebbbe2c1f37db91bf8c459e660c" gracePeriod=600 Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.966438 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"2e4db6dfa0f8e0b89e30204c184a440910ad4ebbbe2c1f37db91bf8c459e660c"} Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.966422 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="2e4db6dfa0f8e0b89e30204c184a440910ad4ebbbe2c1f37db91bf8c459e660c" exitCode=0 Nov 22 08:50:59 crc kubenswrapper[4856]: I1122 08:50:59.967251 4856 scope.go:117] "RemoveContainer" 
containerID="873d640d58624ac0a05a1f9dc98c3cbfa68d002728374445c31442447bb98f76" Nov 22 08:51:01 crc kubenswrapper[4856]: I1122 08:51:01.993742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e"} Nov 22 08:51:05 crc kubenswrapper[4856]: I1122 08:51:05.039375 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p699w" event={"ID":"b2212e72-48ef-465e-9839-473d346956cf","Type":"ContainerStarted","Data":"919ac1cdbfababcff73b65739eb4a50d331f640a7b464e832589ede38abad25d"} Nov 22 08:51:05 crc kubenswrapper[4856]: I1122 08:51:05.059017 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-p699w" podStartSLOduration=1.635086107 podStartE2EDuration="8.059001777s" podCreationTimestamp="2025-11-22 08:50:57 +0000 UTC" firstStartedPulling="2025-11-22 08:50:58.008143479 +0000 UTC m=+6500.421536737" lastFinishedPulling="2025-11-22 08:51:04.432059149 +0000 UTC m=+6506.845452407" observedRunningTime="2025-11-22 08:51:05.057357653 +0000 UTC m=+6507.470750931" watchObservedRunningTime="2025-11-22 08:51:05.059001777 +0000 UTC m=+6507.472395035" Nov 22 08:51:08 crc kubenswrapper[4856]: I1122 08:51:08.496046 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 08:51:11 crc kubenswrapper[4856]: I1122 08:51:11.099764 4856 generic.go:334] "Generic (PLEG): container finished" podID="b2212e72-48ef-465e-9839-473d346956cf" containerID="919ac1cdbfababcff73b65739eb4a50d331f640a7b464e832589ede38abad25d" exitCode=0 Nov 22 08:51:11 crc kubenswrapper[4856]: I1122 08:51:11.100263 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p699w" event={"ID":"b2212e72-48ef-465e-9839-473d346956cf","Type":"ContainerDied","Data":"919ac1cdbfababcff73b65739eb4a50d331f640a7b464e832589ede38abad25d"} Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.534447 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-p699w" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.670398 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-config-data\") pod \"b2212e72-48ef-465e-9839-473d346956cf\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.670823 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2w6q\" (UniqueName: \"kubernetes.io/projected/b2212e72-48ef-465e-9839-473d346956cf-kube-api-access-j2w6q\") pod \"b2212e72-48ef-465e-9839-473d346956cf\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.670982 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-combined-ca-bundle\") pod \"b2212e72-48ef-465e-9839-473d346956cf\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.671009 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-scripts\") pod \"b2212e72-48ef-465e-9839-473d346956cf\" (UID: \"b2212e72-48ef-465e-9839-473d346956cf\") " Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.695745 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-scripts" (OuterVolumeSpecName: "scripts") pod "b2212e72-48ef-465e-9839-473d346956cf" (UID: "b2212e72-48ef-465e-9839-473d346956cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.745556 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2212e72-48ef-465e-9839-473d346956cf-kube-api-access-j2w6q" (OuterVolumeSpecName: "kube-api-access-j2w6q") pod "b2212e72-48ef-465e-9839-473d346956cf" (UID: "b2212e72-48ef-465e-9839-473d346956cf"). InnerVolumeSpecName "kube-api-access-j2w6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.780710 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2w6q\" (UniqueName: \"kubernetes.io/projected/b2212e72-48ef-465e-9839-473d346956cf-kube-api-access-j2w6q\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.780772 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.794270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-config-data" (OuterVolumeSpecName: "config-data") pod "b2212e72-48ef-465e-9839-473d346956cf" (UID: "b2212e72-48ef-465e-9839-473d346956cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.836427 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2212e72-48ef-465e-9839-473d346956cf" (UID: "b2212e72-48ef-465e-9839-473d346956cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.883878 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:12 crc kubenswrapper[4856]: I1122 08:51:12.883914 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2212e72-48ef-465e-9839-473d346956cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:13 crc kubenswrapper[4856]: I1122 08:51:13.127407 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-p699w" event={"ID":"b2212e72-48ef-465e-9839-473d346956cf","Type":"ContainerDied","Data":"842ca56084e2c0fd646c0dbf75ea50b3d743ae59bf5db7b3449062edb3928a2d"} Nov 22 08:51:13 crc kubenswrapper[4856]: I1122 08:51:13.127741 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="842ca56084e2c0fd646c0dbf75ea50b3d743ae59bf5db7b3449062edb3928a2d" Nov 22 08:51:13 crc kubenswrapper[4856]: I1122 08:51:13.127701 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-p699w" Nov 22 08:51:13 crc kubenswrapper[4856]: I1122 08:51:13.921624 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:51:13 crc kubenswrapper[4856]: I1122 08:51:13.921909 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0a74f189-af8a-4787-99d5-ec500950ccc8" containerName="kube-state-metrics" containerID="cri-o://a28aeefba809388cc4215a01b24777b3ab43655ded8ee0e351291026471d517f" gracePeriod=30 Nov 22 08:51:14 crc kubenswrapper[4856]: I1122 08:51:14.139472 4856 generic.go:334] "Generic (PLEG): container finished" podID="0a74f189-af8a-4787-99d5-ec500950ccc8" containerID="a28aeefba809388cc4215a01b24777b3ab43655ded8ee0e351291026471d517f" exitCode=2 Nov 22 08:51:14 crc kubenswrapper[4856]: I1122 08:51:14.139594 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a74f189-af8a-4787-99d5-ec500950ccc8","Type":"ContainerDied","Data":"a28aeefba809388cc4215a01b24777b3ab43655ded8ee0e351291026471d517f"} Nov 22 08:51:14 crc kubenswrapper[4856]: I1122 08:51:14.458456 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 08:51:14 crc kubenswrapper[4856]: I1122 08:51:14.517635 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9pf8\" (UniqueName: \"kubernetes.io/projected/0a74f189-af8a-4787-99d5-ec500950ccc8-kube-api-access-k9pf8\") pod \"0a74f189-af8a-4787-99d5-ec500950ccc8\" (UID: \"0a74f189-af8a-4787-99d5-ec500950ccc8\") " Nov 22 08:51:14 crc kubenswrapper[4856]: I1122 08:51:14.529146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a74f189-af8a-4787-99d5-ec500950ccc8-kube-api-access-k9pf8" (OuterVolumeSpecName: "kube-api-access-k9pf8") pod "0a74f189-af8a-4787-99d5-ec500950ccc8" (UID: "0a74f189-af8a-4787-99d5-ec500950ccc8"). InnerVolumeSpecName "kube-api-access-k9pf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:51:14 crc kubenswrapper[4856]: I1122 08:51:14.619989 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9pf8\" (UniqueName: \"kubernetes.io/projected/0a74f189-af8a-4787-99d5-ec500950ccc8-kube-api-access-k9pf8\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.151591 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0a74f189-af8a-4787-99d5-ec500950ccc8","Type":"ContainerDied","Data":"57ae82c11425083f71976cd6740334b050873b03d59cc88a1feb23afa31e48e6"} Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.151908 4856 scope.go:117] "RemoveContainer" containerID="a28aeefba809388cc4215a01b24777b3ab43655ded8ee0e351291026471d517f" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.152024 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.191314 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.204357 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.217561 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:51:15 crc kubenswrapper[4856]: E1122 08:51:15.218117 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a74f189-af8a-4787-99d5-ec500950ccc8" containerName="kube-state-metrics" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.218134 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a74f189-af8a-4787-99d5-ec500950ccc8" containerName="kube-state-metrics" Nov 22 08:51:15 crc kubenswrapper[4856]: E1122 08:51:15.218150 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2212e72-48ef-465e-9839-473d346956cf" containerName="aodh-db-sync" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.218157 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2212e72-48ef-465e-9839-473d346956cf" containerName="aodh-db-sync" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.218395 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2212e72-48ef-465e-9839-473d346956cf" containerName="aodh-db-sync" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.218424 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a74f189-af8a-4787-99d5-ec500950ccc8" containerName="kube-state-metrics" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 
08:51:15.219232 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.221957 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.222104 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.233185 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.336323 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.336386 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dtc6\" (UniqueName: \"kubernetes.io/projected/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-api-access-4dtc6\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.336435 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.336495 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.438136 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.438297 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.438327 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dtc6\" (UniqueName: \"kubernetes.io/projected/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-api-access-4dtc6\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.438362 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.445135 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.445234 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.446731 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.460651 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dtc6\" (UniqueName: \"kubernetes.io/projected/61795c46-ac49-454d-9ea8-36e6b921c1c5-kube-api-access-4dtc6\") pod \"kube-state-metrics-0\" (UID: \"61795c46-ac49-454d-9ea8-36e6b921c1c5\") " pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.544085 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.776294 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.776871 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="proxy-httpd" containerID="cri-o://3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327" gracePeriod=30 Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.776941 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-central-agent" containerID="cri-o://b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3" gracePeriod=30 Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.776941 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="sg-core" containerID="cri-o://e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527" gracePeriod=30 Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.777590 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-notification-agent" containerID="cri-o://8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d" gracePeriod=30 Nov 22 08:51:15 crc kubenswrapper[4856]: I1122 08:51:15.999042 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 22 08:51:16 crc kubenswrapper[4856]: I1122 08:51:16.172911 4856 generic.go:334] "Generic (PLEG): container finished" podID="870368ad-d281-4f1a-a37f-2aa672506c81" containerID="3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327" exitCode=0 Nov 22 08:51:16 crc kubenswrapper[4856]: I1122 08:51:16.173226 4856 generic.go:334] "Generic (PLEG): container finished" podID="870368ad-d281-4f1a-a37f-2aa672506c81" containerID="e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527" exitCode=2 Nov 22 08:51:16 crc kubenswrapper[4856]: I1122 08:51:16.172993 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerDied","Data":"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327"} Nov 22 08:51:16 crc kubenswrapper[4856]: I1122 08:51:16.173269 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerDied","Data":"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527"} Nov 22 08:51:16 crc kubenswrapper[4856]: I1122 08:51:16.174808 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"61795c46-ac49-454d-9ea8-36e6b921c1c5","Type":"ContainerStarted","Data":"0f7f344886c46aaa74a88c1193c3bd53806fedde1f9682ffe37ca7bb7e3edbe9"} Nov 22 08:51:16 crc kubenswrapper[4856]: I1122 08:51:16.728056 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a74f189-af8a-4787-99d5-ec500950ccc8" path="/var/lib/kubelet/pods/0a74f189-af8a-4787-99d5-ec500950ccc8/volumes" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.097143 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.188596 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-config-data\") pod \"870368ad-d281-4f1a-a37f-2aa672506c81\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.188708 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj25r\" (UniqueName: \"kubernetes.io/projected/870368ad-d281-4f1a-a37f-2aa672506c81-kube-api-access-rj25r\") pod \"870368ad-d281-4f1a-a37f-2aa672506c81\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.188806 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-scripts\") pod \"870368ad-d281-4f1a-a37f-2aa672506c81\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.188923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-sg-core-conf-yaml\") pod \"870368ad-d281-4f1a-a37f-2aa672506c81\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.188965 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-combined-ca-bundle\") pod \"870368ad-d281-4f1a-a37f-2aa672506c81\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.188993 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-log-httpd\") pod \"870368ad-d281-4f1a-a37f-2aa672506c81\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.189034 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-run-httpd\") pod \"870368ad-d281-4f1a-a37f-2aa672506c81\" (UID: \"870368ad-d281-4f1a-a37f-2aa672506c81\") " Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.190106 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "870368ad-d281-4f1a-a37f-2aa672506c81" (UID: "870368ad-d281-4f1a-a37f-2aa672506c81"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.193472 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "870368ad-d281-4f1a-a37f-2aa672506c81" (UID: "870368ad-d281-4f1a-a37f-2aa672506c81"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.199980 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-scripts" (OuterVolumeSpecName: "scripts") pod "870368ad-d281-4f1a-a37f-2aa672506c81" (UID: "870368ad-d281-4f1a-a37f-2aa672506c81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206031 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870368ad-d281-4f1a-a37f-2aa672506c81-kube-api-access-rj25r" (OuterVolumeSpecName: "kube-api-access-rj25r") pod "870368ad-d281-4f1a-a37f-2aa672506c81" (UID: "870368ad-d281-4f1a-a37f-2aa672506c81"). InnerVolumeSpecName "kube-api-access-rj25r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206092 4856 generic.go:334] "Generic (PLEG): container finished" podID="870368ad-d281-4f1a-a37f-2aa672506c81" containerID="b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3" exitCode=0 Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206354 4856 generic.go:334] "Generic (PLEG): container finished" podID="870368ad-d281-4f1a-a37f-2aa672506c81" containerID="8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d" exitCode=0 Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206122 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerDied","Data":"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3"} Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206681 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerDied","Data":"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d"} Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206769 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"870368ad-d281-4f1a-a37f-2aa672506c81","Type":"ContainerDied","Data":"7802f598813d7b36c59a93ace305ec22f87dd859f78e145d23e079d0f83e9282"} Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206852 4856 scope.go:117] "RemoveContainer" containerID="b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.206207 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.225260 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"61795c46-ac49-454d-9ea8-36e6b921c1c5","Type":"ContainerStarted","Data":"b3962cf874a1fda92b9e0d07667994ca83f6b1ff81d5b76130f838b003793af0"} Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.226729 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.249538 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.249975 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-notification-agent" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.249988 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-notification-agent" Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.250011 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="proxy-httpd" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.250017 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="proxy-httpd" Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.250031 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-central-agent" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.250037 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-central-agent" Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.250053 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="sg-core" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.250059 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="sg-core" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.250235 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-central-agent" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.250268 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="sg-core" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.250282 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="ceilometer-notification-agent" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.250289 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" containerName="proxy-httpd" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.252113 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.262159 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.262800 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.268718 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5c97j" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.292008 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj25r\" (UniqueName: \"kubernetes.io/projected/870368ad-d281-4f1a-a37f-2aa672506c81-kube-api-access-rj25r\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.292320 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.292331 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.292341 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/870368ad-d281-4f1a-a37f-2aa672506c81-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.298785 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "870368ad-d281-4f1a-a37f-2aa672506c81" (UID: "870368ad-d281-4f1a-a37f-2aa672506c81"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.334788 4856 scope.go:117] "RemoveContainer" containerID="3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.335062 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.337647 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.975705369 podStartE2EDuration="2.337620804s" podCreationTimestamp="2025-11-22 08:51:15 +0000 UTC" firstStartedPulling="2025-11-22 08:51:16.004947433 +0000 UTC m=+6518.418340691" lastFinishedPulling="2025-11-22 08:51:16.366862878 +0000 UTC m=+6518.780256126" observedRunningTime="2025-11-22 08:51:17.279238221 +0000 UTC m=+6519.692631479" watchObservedRunningTime="2025-11-22 08:51:17.337620804 +0000 UTC m=+6519.751014072" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.395471 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-scripts\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.395782 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.396037 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7h6\" (UniqueName: \"kubernetes.io/projected/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-kube-api-access-gg7h6\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.396176 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-config-data\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.396355 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.452732 4856 scope.go:117] "RemoveContainer" containerID="e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.497927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7h6\" (UniqueName: \"kubernetes.io/projected/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-kube-api-access-gg7h6\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.497994 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-config-data\") pod \"aodh-0\" (UID: 
\"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.498047 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-scripts\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.498085 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.504155 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-scripts\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.504479 4856 scope.go:117] "RemoveContainer" containerID="8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.506274 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.510140 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "870368ad-d281-4f1a-a37f-2aa672506c81" (UID: "870368ad-d281-4f1a-a37f-2aa672506c81"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.511196 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-config-data\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.522122 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7h6\" (UniqueName: \"kubernetes.io/projected/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-kube-api-access-gg7h6\") pod \"aodh-0\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.540591 4856 scope.go:117] "RemoveContainer" containerID="b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3" Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.541301 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3\": container with ID starting with b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3 not found: ID does not exist" containerID="b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.541356 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3"} err="failed to get container status \"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3\": rpc error: code = NotFound desc = could not find container \"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3\": container with ID starting with b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3 not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.541388 4856 scope.go:117] "RemoveContainer" containerID="3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327" Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.542595 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327\": container with ID starting with 3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327 not found: ID does not exist" containerID="3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.542631 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327"} err="failed to get container status \"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327\": rpc error: code = NotFound desc = could not find container \"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327\": container with ID starting with 3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327 not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.542651 4856 scope.go:117] "RemoveContainer" containerID="e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527" Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.543166 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527\": container with ID starting with e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527 not found: ID does not exist" containerID="e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.543193 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527"} err="failed to get container status \"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527\": rpc error: code = NotFound desc = could not find container \"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527\": container with ID starting with e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527 not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.543209 4856 scope.go:117] "RemoveContainer" containerID="8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d" Nov 22 08:51:17 crc kubenswrapper[4856]: E1122 08:51:17.543895 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d\": container with ID starting with 8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d not found: ID does not exist" containerID="8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.543926 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d"} err="failed to get container status \"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d\": rpc error: code = NotFound desc = could not find container \"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d\": container with ID starting with 8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.543945 4856 scope.go:117] "RemoveContainer" containerID="b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.544521 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3"} err="failed to get container status \"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3\": rpc error: code = NotFound desc = could not find container \"b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3\": container with ID starting with b5ece35eba6ddf84b9783b5f1fbebbd31b2833a45443c8e8c8763ac68ab953d3 not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.544542 4856 scope.go:117] "RemoveContainer" containerID="3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.545331 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327"} err="failed to get container status \"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327\": rpc error: code = NotFound desc = could not find container \"3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327\": container 
with ID starting with 3c3d43689f2164b6a7e8aecce53d2d84d54710e66910190b35f347bae3ddb327 not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.545354 4856 scope.go:117] "RemoveContainer" containerID="e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.546245 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527"} err="failed to get container status \"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527\": rpc error: code = NotFound desc = could not find container \"e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527\": container with ID starting with e8982adb8ee944aa461ff5cfbd121673b06973fa40a0b1c37b13cfad4f70e527 not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.546267 4856 scope.go:117] "RemoveContainer" containerID="8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.546771 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d"} err="failed to get container status \"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d\": rpc error: code = NotFound desc = could not find container \"8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d\": container with ID starting with 8715cc68f2c67f0dec6df7aaff54d0176fcd2e8411aa769a4b257ca18b58233d not found: ID does not exist" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.563354 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-config-data" (OuterVolumeSpecName: "config-data") pod "870368ad-d281-4f1a-a37f-2aa672506c81" (UID: "870368ad-d281-4f1a-a37f-2aa672506c81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.599728 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.599756 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/870368ad-d281-4f1a-a37f-2aa672506c81-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.656205 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.852190 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.871344 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.898420 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.901027 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.902958 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.904294 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.904544 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 08:51:17 crc kubenswrapper[4856]: I1122 08:51:17.908649 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.007490 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dvft\" (UniqueName: \"kubernetes.io/projected/727bb4c7-d4ba-419a-95bb-f5544209e45c-kube-api-access-5dvft\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.007607 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-scripts\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.007692 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.007719 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-log-httpd\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.007945 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-run-httpd\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.007981 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.008031 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-config-data\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.008081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110056 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dvft\" (UniqueName: \"kubernetes.io/projected/727bb4c7-d4ba-419a-95bb-f5544209e45c-kube-api-access-5dvft\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110106 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-scripts\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110133 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-log-httpd\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110263 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-run-httpd\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110288 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110307 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-config-data\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110331 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110753 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-log-httpd\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.110799 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-run-httpd\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.116455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-config-data\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.122096 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.124796 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-scripts\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.127088 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.129207 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.138494 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dvft\" (UniqueName: \"kubernetes.io/projected/727bb4c7-d4ba-419a-95bb-f5544209e45c-kube-api-access-5dvft\") pod \"ceilometer-0\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.152861 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:18 crc kubenswrapper[4856]: W1122 08:51:18.153140 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fafe7e9_2c4b_4692_8b21_e22065d9e60b.slice/crio-a075d4af2ee911597bfd0d19b925d3a766c7fe49b4103fdfd8763596f0446d50 WatchSource:0}: Error finding container a075d4af2ee911597bfd0d19b925d3a766c7fe49b4103fdfd8763596f0446d50: Status 404 returned error can't find the container with id a075d4af2ee911597bfd0d19b925d3a766c7fe49b4103fdfd8763596f0446d50 Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.227724 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.237049 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerStarted","Data":"a075d4af2ee911597bfd0d19b925d3a766c7fe49b4103fdfd8763596f0446d50"} Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.687234 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:18 crc kubenswrapper[4856]: I1122 08:51:18.720547 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870368ad-d281-4f1a-a37f-2aa672506c81" path="/var/lib/kubelet/pods/870368ad-d281-4f1a-a37f-2aa672506c81/volumes" Nov 22 08:51:19 crc kubenswrapper[4856]: I1122 08:51:19.255646 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerStarted","Data":"28741a4d30536661164de1ab67b0e02f98ad8d492e3f805803d4c785b22fac1a"} Nov 22 08:51:19 crc kubenswrapper[4856]: I1122 08:51:19.256060 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerStarted","Data":"9b511df7183a92a71fd3e01e74e3745ccfcf7734975d861374c2831a9a30788c"} Nov 22 08:51:19 crc kubenswrapper[4856]: I1122 08:51:19.260696 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerStarted","Data":"e5d71177ac0b40b8f705bd3d55712ed5fed448a225ee4904813b977497f1dda8"} Nov 22 08:51:20 crc kubenswrapper[4856]: I1122 08:51:20.070116 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-2zs2k"] Nov 22 08:51:20 crc kubenswrapper[4856]: I1122 08:51:20.080011 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-2zs2k"] Nov 22 08:51:20 crc kubenswrapper[4856]: I1122 08:51:20.284055 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerStarted","Data":"2163b601979b55cdd631b86bc8cde918ceb7dc725359927809bd563f0a3f3b51"} Nov 22 08:51:20 crc kubenswrapper[4856]: I1122 08:51:20.287141 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerStarted","Data":"7f92f9f89936edc8b5ee46d2af567b83ed6ad1456d127a6268a19ca7c04003cf"} Nov 22 08:51:20 crc kubenswrapper[4856]: I1122 08:51:20.647500 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:20 crc kubenswrapper[4856]: I1122 08:51:20.724998 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267851c9-b132-4a55-a827-0844d57af030" path="/var/lib/kubelet/pods/267851c9-b132-4a55-a827-0844d57af030/volumes" Nov 22 08:51:20 crc kubenswrapper[4856]: I1122 08:51:20.913309 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:21 crc kubenswrapper[4856]: I1122 08:51:21.303039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerStarted","Data":"72994d1814c4623b0229806242aca063e87a358f2064001b1862b99e650cf868"} Nov 22 08:51:22 crc kubenswrapper[4856]: I1122 08:51:22.317656 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerStarted","Data":"c5fda2422d05938550c2c1329aa7490933fc13d03787160792e306692e0e15af"} Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.365431 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerStarted","Data":"1969bf5603985faa1c891cfcdbc32c03aff8a77b994ac0974ba9bfafd0dace71"} Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.366915 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-listener" containerID="cri-o://1969bf5603985faa1c891cfcdbc32c03aff8a77b994ac0974ba9bfafd0dace71" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.366999 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-notifier" containerID="cri-o://c5fda2422d05938550c2c1329aa7490933fc13d03787160792e306692e0e15af" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.367048 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-evaluator" containerID="cri-o://2163b601979b55cdd631b86bc8cde918ceb7dc725359927809bd563f0a3f3b51" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.367089 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-api" containerID="cri-o://e5d71177ac0b40b8f705bd3d55712ed5fed448a225ee4904813b977497f1dda8" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.382084 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerStarted","Data":"64fb2adf61ae4803fabc53e7b88ef6589c728e68ecb424c6637cd9eae9215d07"} Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.382274 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-central-agent" containerID="cri-o://28741a4d30536661164de1ab67b0e02f98ad8d492e3f805803d4c785b22fac1a" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.382385 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.382428 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="proxy-httpd" containerID="cri-o://64fb2adf61ae4803fabc53e7b88ef6589c728e68ecb424c6637cd9eae9215d07" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.382482 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="sg-core" containerID="cri-o://72994d1814c4623b0229806242aca063e87a358f2064001b1862b99e650cf868" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.382583 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-notification-agent" 
containerID="cri-o://7f92f9f89936edc8b5ee46d2af567b83ed6ad1456d127a6268a19ca7c04003cf" gracePeriod=30 Nov 22 08:51:23 crc kubenswrapper[4856]: I1122 08:51:23.404400 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.941017462 podStartE2EDuration="6.404383827s" podCreationTimestamp="2025-11-22 08:51:17 +0000 UTC" firstStartedPulling="2025-11-22 08:51:18.158262953 +0000 UTC m=+6520.571656211" lastFinishedPulling="2025-11-22 08:51:22.621629318 +0000 UTC m=+6525.035022576" observedRunningTime="2025-11-22 08:51:23.397999135 +0000 UTC m=+6525.811392403" watchObservedRunningTime="2025-11-22 08:51:23.404383827 +0000 UTC m=+6525.817777085" Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.392728 4856 generic.go:334] "Generic (PLEG): container finished" podID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerID="64fb2adf61ae4803fabc53e7b88ef6589c728e68ecb424c6637cd9eae9215d07" exitCode=0 Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.393193 4856 generic.go:334] "Generic (PLEG): container finished" podID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerID="72994d1814c4623b0229806242aca063e87a358f2064001b1862b99e650cf868" exitCode=2 Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.392805 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerDied","Data":"64fb2adf61ae4803fabc53e7b88ef6589c728e68ecb424c6637cd9eae9215d07"} Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.393240 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerDied","Data":"72994d1814c4623b0229806242aca063e87a358f2064001b1862b99e650cf868"} Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.393257 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerDied","Data":"7f92f9f89936edc8b5ee46d2af567b83ed6ad1456d127a6268a19ca7c04003cf"} Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.393207 4856 generic.go:334] "Generic (PLEG): container finished" podID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerID="7f92f9f89936edc8b5ee46d2af567b83ed6ad1456d127a6268a19ca7c04003cf" exitCode=0 Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.396588 4856 generic.go:334] "Generic (PLEG): container finished" podID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerID="c5fda2422d05938550c2c1329aa7490933fc13d03787160792e306692e0e15af" exitCode=0 Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.396620 4856 generic.go:334] "Generic (PLEG): container finished" podID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerID="2163b601979b55cdd631b86bc8cde918ceb7dc725359927809bd563f0a3f3b51" exitCode=0 Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.396628 4856 generic.go:334] "Generic (PLEG): container finished" podID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerID="e5d71177ac0b40b8f705bd3d55712ed5fed448a225ee4904813b977497f1dda8" exitCode=0 Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.396646 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerDied","Data":"c5fda2422d05938550c2c1329aa7490933fc13d03787160792e306692e0e15af"} Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.396670 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerDied","Data":"2163b601979b55cdd631b86bc8cde918ceb7dc725359927809bd563f0a3f3b51"} Nov 22 08:51:24 crc kubenswrapper[4856]: I1122 08:51:24.396680 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerDied","Data":"e5d71177ac0b40b8f705bd3d55712ed5fed448a225ee4904813b977497f1dda8"} Nov 22 08:51:25 crc kubenswrapper[4856]: I1122 08:51:25.558355 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 22 08:51:25 crc kubenswrapper[4856]: I1122 08:51:25.580292 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.665368454 podStartE2EDuration="8.580270906s" podCreationTimestamp="2025-11-22 08:51:17 +0000 UTC" firstStartedPulling="2025-11-22 08:51:18.70721581 +0000 UTC m=+6521.120609068" lastFinishedPulling="2025-11-22 08:51:22.622118262 +0000 UTC m=+6525.035511520" observedRunningTime="2025-11-22 08:51:23.432912436 +0000 UTC m=+6525.846305694" watchObservedRunningTime="2025-11-22 08:51:25.580270906 +0000 UTC m=+6527.993664154" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.433768 4856 generic.go:334] "Generic (PLEG): container finished" podID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerID="28741a4d30536661164de1ab67b0e02f98ad8d492e3f805803d4c785b22fac1a" exitCode=0 Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.433846 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerDied","Data":"28741a4d30536661164de1ab67b0e02f98ad8d492e3f805803d4c785b22fac1a"} Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.587987 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.738109 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-sg-core-conf-yaml\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.738257 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-log-httpd\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.739038 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.739204 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-config-data\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.739370 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-combined-ca-bundle\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.739538 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dvft\" (UniqueName: \"kubernetes.io/projected/727bb4c7-d4ba-419a-95bb-f5544209e45c-kube-api-access-5dvft\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.739644 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-ceilometer-tls-certs\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.739751 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-run-httpd\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.739923 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-scripts\") pod \"727bb4c7-d4ba-419a-95bb-f5544209e45c\" (UID: \"727bb4c7-d4ba-419a-95bb-f5544209e45c\") " Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.740264 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.740964 4856 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.741038 4856 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/727bb4c7-d4ba-419a-95bb-f5544209e45c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.744901 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727bb4c7-d4ba-419a-95bb-f5544209e45c-kube-api-access-5dvft" (OuterVolumeSpecName: "kube-api-access-5dvft") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "kube-api-access-5dvft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.745843 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-scripts" (OuterVolumeSpecName: "scripts") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.773377 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.795121 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.841037 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.842653 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.842670 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dvft\" (UniqueName: \"kubernetes.io/projected/727bb4c7-d4ba-419a-95bb-f5544209e45c-kube-api-access-5dvft\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.842682 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.842690 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.842700 4856 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.865907 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-config-data" (OuterVolumeSpecName: "config-data") pod "727bb4c7-d4ba-419a-95bb-f5544209e45c" (UID: "727bb4c7-d4ba-419a-95bb-f5544209e45c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:27 crc kubenswrapper[4856]: I1122 08:51:27.944829 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/727bb4c7-d4ba-419a-95bb-f5544209e45c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.446702 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"727bb4c7-d4ba-419a-95bb-f5544209e45c","Type":"ContainerDied","Data":"9b511df7183a92a71fd3e01e74e3745ccfcf7734975d861374c2831a9a30788c"} Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.446757 4856 scope.go:117] "RemoveContainer" containerID="64fb2adf61ae4803fabc53e7b88ef6589c728e68ecb424c6637cd9eae9215d07" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.446924 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.479718 4856 scope.go:117] "RemoveContainer" containerID="72994d1814c4623b0229806242aca063e87a358f2064001b1862b99e650cf868" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.489186 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.501463 4856 scope.go:117] "RemoveContainer" containerID="7f92f9f89936edc8b5ee46d2af567b83ed6ad1456d127a6268a19ca7c04003cf" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.503082 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.520212 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:28 crc kubenswrapper[4856]: E1122 08:51:28.520700 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="sg-core" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.520726 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="sg-core" Nov 22 08:51:28 crc kubenswrapper[4856]: E1122 08:51:28.520756 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-central-agent" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.520764 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-central-agent" Nov 22 08:51:28 crc kubenswrapper[4856]: E1122 08:51:28.520788 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-notification-agent" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.520796 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-notification-agent" Nov 22 08:51:28 crc kubenswrapper[4856]: E1122 08:51:28.520809 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="proxy-httpd" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.520815 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="proxy-httpd" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.520999 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" 
containerName="proxy-httpd" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.521017 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="sg-core" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.521029 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-notification-agent" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.521044 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" containerName="ceilometer-central-agent" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.523146 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.527177 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.527495 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.528749 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.528965 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.535325 4856 scope.go:117] "RemoveContainer" containerID="28741a4d30536661164de1ab67b0e02f98ad8d492e3f805803d4c785b22fac1a" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.659961 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d442d81d-f24e-4a27-bbb5-f25a1792bfca-run-httpd\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.660036 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.660077 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.660151 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-scripts\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.660187 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4zrq\" (UniqueName: \"kubernetes.io/projected/d442d81d-f24e-4a27-bbb5-f25a1792bfca-kube-api-access-d4zrq\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc 
kubenswrapper[4856]: I1122 08:51:28.660211 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.660243 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d442d81d-f24e-4a27-bbb5-f25a1792bfca-log-httpd\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.660314 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-config-data\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.734178 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="727bb4c7-d4ba-419a-95bb-f5544209e45c" path="/var/lib/kubelet/pods/727bb4c7-d4ba-419a-95bb-f5544209e45c/volumes" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.762809 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-scripts\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.762899 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4zrq\" (UniqueName: \"kubernetes.io/projected/d442d81d-f24e-4a27-bbb5-f25a1792bfca-kube-api-access-d4zrq\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.763007 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.763054 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d442d81d-f24e-4a27-bbb5-f25a1792bfca-log-httpd\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.763131 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-config-data\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.763178 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d442d81d-f24e-4a27-bbb5-f25a1792bfca-run-httpd\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.763198 4856 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.763229 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.764338 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d442d81d-f24e-4a27-bbb5-f25a1792bfca-run-httpd\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.764454 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d442d81d-f24e-4a27-bbb5-f25a1792bfca-log-httpd\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.768083 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-scripts\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.770124 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.770175 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.770190 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-config-data\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.783225 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d442d81d-f24e-4a27-bbb5-f25a1792bfca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.797005 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4zrq\" (UniqueName: \"kubernetes.io/projected/d442d81d-f24e-4a27-bbb5-f25a1792bfca-kube-api-access-d4zrq\") pod \"ceilometer-0\" (UID: \"d442d81d-f24e-4a27-bbb5-f25a1792bfca\") " pod="openstack/ceilometer-0" Nov 22 08:51:28 crc kubenswrapper[4856]: I1122 08:51:28.848906 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 22 08:51:29 crc kubenswrapper[4856]: I1122 08:51:29.321961 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 22 08:51:29 crc kubenswrapper[4856]: W1122 08:51:29.332930 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd442d81d_f24e_4a27_bbb5_f25a1792bfca.slice/crio-bdf348a255474f9cd6019dade2f8c44fac60a9e8b049a29ca8da0eaaac4b63c1 WatchSource:0}: Error finding container bdf348a255474f9cd6019dade2f8c44fac60a9e8b049a29ca8da0eaaac4b63c1: Status 404 returned error can't find the container with id bdf348a255474f9cd6019dade2f8c44fac60a9e8b049a29ca8da0eaaac4b63c1 Nov 22 08:51:29 crc kubenswrapper[4856]: I1122 08:51:29.461058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d442d81d-f24e-4a27-bbb5-f25a1792bfca","Type":"ContainerStarted","Data":"bdf348a255474f9cd6019dade2f8c44fac60a9e8b049a29ca8da0eaaac4b63c1"} Nov 22 08:51:30 crc kubenswrapper[4856]: I1122 08:51:30.474569 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d442d81d-f24e-4a27-bbb5-f25a1792bfca","Type":"ContainerStarted","Data":"649db818c4cd7152fa19458da7e6ce8bc03d5a11d57ed5c2a38b8c1fcc749034"} Nov 22 08:51:31 crc kubenswrapper[4856]: I1122 08:51:31.488106 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d442d81d-f24e-4a27-bbb5-f25a1792bfca","Type":"ContainerStarted","Data":"a1b8b08304d0c6c372eca2979c4c4fda7ffd3ea9e4361a4228dd4333ff6c451a"} Nov 22 08:51:32 crc kubenswrapper[4856]: I1122 08:51:32.509723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d442d81d-f24e-4a27-bbb5-f25a1792bfca","Type":"ContainerStarted","Data":"56f5414776cab40481dbf4d9e96e85b6d432ae900ebac52dd31557ae7fbb5cee"} Nov 22 08:51:33 crc kubenswrapper[4856]: I1122 08:51:33.524798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d442d81d-f24e-4a27-bbb5-f25a1792bfca","Type":"ContainerStarted","Data":"382538ed6fe04584c0f8b57e6f54b3d8d64a14698af8b21ef99fe4480f62092a"} Nov 22 08:51:33 crc kubenswrapper[4856]: I1122 08:51:33.526062 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 22 08:51:35 crc kubenswrapper[4856]: I1122 08:51:35.545418 4856 scope.go:117] "RemoveContainer" containerID="76617c03f9b62e7148875343c74a91123a984fbed254d762e837e281d8f51154" Nov 22 08:51:35 crc kubenswrapper[4856]: I1122 08:51:35.583051 4856 scope.go:117] "RemoveContainer" containerID="3d08a6b17d0c267b751afcdc61abccf0504150187cf5a6260aa7a0934df37c3f" Nov 22 08:51:35 crc kubenswrapper[4856]: I1122 08:51:35.623358 4856 scope.go:117] "RemoveContainer" containerID="4b068997da0976c377390de0aa6b56639cf2efa55f68565090c895741512b7b6" Nov 22 08:51:51 crc kubenswrapper[4856]: I1122 08:51:51.031155 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=19.811254291 podStartE2EDuration="23.031137569s" podCreationTimestamp="2025-11-22 08:51:28 +0000 UTC" firstStartedPulling="2025-11-22 08:51:29.335831133 +0000 UTC m=+6531.749224391" lastFinishedPulling="2025-11-22 08:51:32.555714411 +0000 UTC m=+6534.969107669" observedRunningTime="2025-11-22 08:51:33.55293901 +0000 UTC m=+6535.966332268" watchObservedRunningTime="2025-11-22 08:51:51.031137569 +0000 UTC m=+6553.444530827" Nov 22 
08:51:51 crc kubenswrapper[4856]: I1122 08:51:51.039207 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-j6kvd"] Nov 22 08:51:51 crc kubenswrapper[4856]: I1122 08:51:51.048085 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-j6kvd"] Nov 22 08:51:52 crc kubenswrapper[4856]: I1122 08:51:52.025625 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-38fd-account-create-vmlhx"] Nov 22 08:51:52 crc kubenswrapper[4856]: I1122 08:51:52.033454 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-38fd-account-create-vmlhx"] Nov 22 08:51:52 crc kubenswrapper[4856]: I1122 08:51:52.719592 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1c0375c-7ae6-478c-a7a4-501faf59190c" path="/var/lib/kubelet/pods/b1c0375c-7ae6-478c-a7a4-501faf59190c/volumes" Nov 22 08:51:52 crc kubenswrapper[4856]: I1122 08:51:52.720658 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac198ef-b907-472d-8004-7c5f07fd55f9" path="/var/lib/kubelet/pods/dac198ef-b907-472d-8004-7c5f07fd55f9/volumes" Nov 22 08:51:54 crc kubenswrapper[4856]: I1122 08:51:54.730929 4856 generic.go:334] "Generic (PLEG): container finished" podID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerID="1969bf5603985faa1c891cfcdbc32c03aff8a77b994ac0974ba9bfafd0dace71" exitCode=137 Nov 22 08:51:54 crc kubenswrapper[4856]: I1122 08:51:54.730970 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerDied","Data":"1969bf5603985faa1c891cfcdbc32c03aff8a77b994ac0974ba9bfafd0dace71"} Nov 22 08:51:54 crc kubenswrapper[4856]: I1122 08:51:54.911464 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.064451 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-combined-ca-bundle\") pod \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.064533 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-scripts\") pod \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.064600 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg7h6\" (UniqueName: \"kubernetes.io/projected/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-kube-api-access-gg7h6\") pod \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.064725 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-config-data\") pod \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\" (UID: \"8fafe7e9-2c4b-4692-8b21-e22065d9e60b\") " Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.072131 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-scripts" (OuterVolumeSpecName: "scripts") pod "8fafe7e9-2c4b-4692-8b21-e22065d9e60b" (UID: "8fafe7e9-2c4b-4692-8b21-e22065d9e60b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.073754 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-kube-api-access-gg7h6" (OuterVolumeSpecName: "kube-api-access-gg7h6") pod "8fafe7e9-2c4b-4692-8b21-e22065d9e60b" (UID: "8fafe7e9-2c4b-4692-8b21-e22065d9e60b"). InnerVolumeSpecName "kube-api-access-gg7h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.167691 4856 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-scripts\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.167725 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg7h6\" (UniqueName: \"kubernetes.io/projected/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-kube-api-access-gg7h6\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.193282 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fafe7e9-2c4b-4692-8b21-e22065d9e60b" (UID: "8fafe7e9-2c4b-4692-8b21-e22065d9e60b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.212973 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-config-data" (OuterVolumeSpecName: "config-data") pod "8fafe7e9-2c4b-4692-8b21-e22065d9e60b" (UID: "8fafe7e9-2c4b-4692-8b21-e22065d9e60b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.270695 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.271315 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fafe7e9-2c4b-4692-8b21-e22065d9e60b-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.744520 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"8fafe7e9-2c4b-4692-8b21-e22065d9e60b","Type":"ContainerDied","Data":"a075d4af2ee911597bfd0d19b925d3a766c7fe49b4103fdfd8763596f0446d50"} Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.744592 4856 scope.go:117] "RemoveContainer" containerID="1969bf5603985faa1c891cfcdbc32c03aff8a77b994ac0974ba9bfafd0dace71" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.744634 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.773821 4856 scope.go:117] "RemoveContainer" containerID="c5fda2422d05938550c2c1329aa7490933fc13d03787160792e306692e0e15af" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.777279 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.790236 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.798100 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:55 crc kubenswrapper[4856]: E1122 08:51:55.798619 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-listener" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.798643 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-listener" Nov 22 08:51:55 crc kubenswrapper[4856]: E1122 08:51:55.798676 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-api" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.798685 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-api" Nov 22 08:51:55 crc kubenswrapper[4856]: E1122 08:51:55.798703 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-notifier" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.798711 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-notifier" Nov 22 08:51:55 crc kubenswrapper[4856]: E1122 08:51:55.798745 4856 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-evaluator" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.798753 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-evaluator" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.798999 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-notifier" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.799021 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-evaluator" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.799041 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-api" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.799051 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" containerName="aodh-listener" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.800647 4856 scope.go:117] "RemoveContainer" containerID="2163b601979b55cdd631b86bc8cde918ceb7dc725359927809bd563f0a3f3b51" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.801584 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.806241 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.806274 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.806474 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-5c97j" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.807275 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.807456 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.834612 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.845383 4856 scope.go:117] "RemoveContainer" containerID="e5d71177ac0b40b8f705bd3d55712ed5fed448a225ee4904813b977497f1dda8" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.884177 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-scripts\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.884250 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mccts\" (UniqueName: \"kubernetes.io/projected/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-kube-api-access-mccts\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.884296 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-internal-tls-certs\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.884331 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-public-tls-certs\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.884392 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-config-data\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.884448 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.987032 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-public-tls-certs\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.987400 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-config-data\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.987447 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.987619 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-scripts\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.987642 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mccts\" (UniqueName: \"kubernetes.io/projected/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-kube-api-access-mccts\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.987679 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-internal-tls-certs\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.992281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.992295 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-scripts\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.992477 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-config-data\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.992979 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-internal-tls-certs\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:55 crc kubenswrapper[4856]: I1122 08:51:55.993948 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-public-tls-certs\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:56 crc kubenswrapper[4856]: I1122 08:51:56.005906 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mccts\" (UniqueName: \"kubernetes.io/projected/b3bdd433-fc71-456a-8e71-69b05aa2f6c9-kube-api-access-mccts\") pod \"aodh-0\" (UID: \"b3bdd433-fc71-456a-8e71-69b05aa2f6c9\") " pod="openstack/aodh-0" Nov 22 08:51:56 crc kubenswrapper[4856]: I1122 08:51:56.144657 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 22 08:51:56 crc kubenswrapper[4856]: I1122 08:51:56.630617 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 22 08:51:56 crc kubenswrapper[4856]: I1122 08:51:56.720937 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fafe7e9-2c4b-4692-8b21-e22065d9e60b" path="/var/lib/kubelet/pods/8fafe7e9-2c4b-4692-8b21-e22065d9e60b/volumes" Nov 22 08:51:56 crc kubenswrapper[4856]: I1122 08:51:56.753551 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bdd433-fc71-456a-8e71-69b05aa2f6c9","Type":"ContainerStarted","Data":"9153455119a32dff448d62e36c082851c9702d03411d293f25b5169eacaed955"} Nov 22 08:51:57 crc kubenswrapper[4856]: I1122 08:51:57.765996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bdd433-fc71-456a-8e71-69b05aa2f6c9","Type":"ContainerStarted","Data":"685baa9ca1560f9891d504ae7377dee74992d5f1a57187d174698b0a45bc255f"} Nov 22 08:51:57 crc kubenswrapper[4856]: I1122 08:51:57.766417 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bdd433-fc71-456a-8e71-69b05aa2f6c9","Type":"ContainerStarted","Data":"7a0710f4cefc950b6e0146d8a3e307f579e1d659fadad27796e84ebb3f8a2f93"} Nov 22 08:51:58 crc kubenswrapper[4856]: I1122 08:51:58.782179 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bdd433-fc71-456a-8e71-69b05aa2f6c9","Type":"ContainerStarted","Data":"2e85ad01ffdd4989caa0b423565286e72b44d515a0cae1b0073805d93915f6fd"} Nov 22 08:51:59 crc kubenswrapper[4856]: I1122 08:51:59.205147 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 22 08:52:00 crc kubenswrapper[4856]: I1122 08:52:00.809103 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bdd433-fc71-456a-8e71-69b05aa2f6c9","Type":"ContainerStarted","Data":"df6a27a72fa9095ff59bfa723c531ad5735506258e3811994b4ae6d75d94bd39"} Nov 22 08:52:01 crc kubenswrapper[4856]: I1122 08:52:01.025183 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.442983267 podStartE2EDuration="6.025161587s" podCreationTimestamp="2025-11-22 08:51:55 +0000 UTC" firstStartedPulling="2025-11-22 08:51:56.632261821 +0000 UTC m=+6559.045655069" lastFinishedPulling="2025-11-22 08:51:59.214440131 +0000 UTC m=+6561.627833389" observedRunningTime="2025-11-22 08:52:00.839668307 +0000 UTC m=+6563.253061565" watchObservedRunningTime="2025-11-22 08:52:01.025161587 +0000 UTC m=+6563.438554875" Nov 22 08:52:01 crc kubenswrapper[4856]: I1122 08:52:01.030652 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-cbcx6"] Nov 22 08:52:01 crc kubenswrapper[4856]: I1122 08:52:01.041099 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-cbcx6"] Nov 22 08:52:02 crc kubenswrapper[4856]: I1122 08:52:02.733947 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eccf4778-135b-45e6-958d-2ecd55a79d70" path="/var/lib/kubelet/pods/eccf4778-135b-45e6-958d-2ecd55a79d70/volumes" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.282380 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-756c696cf7-mjwln"] Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.284785 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.293840 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756c696cf7-mjwln"] Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.303892 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.363062 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-openstack-cell1\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.363123 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf8gv\" (UniqueName: \"kubernetes.io/projected/e4a5140a-d9f8-435d-a9fd-9385591e44fc-kube-api-access-pf8gv\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.363152 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-sb\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.363205 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-nb\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.363235 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-dns-svc\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.363368 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-config\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.464984 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-openstack-cell1\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.465036 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf8gv\" (UniqueName: \"kubernetes.io/projected/e4a5140a-d9f8-435d-a9fd-9385591e44fc-kube-api-access-pf8gv\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: 
\"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.465064 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-sb\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.465121 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-nb\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.465148 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-dns-svc\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.465240 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-config\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.466067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-openstack-cell1\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.466243 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-sb\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.466407 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-nb\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.466741 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-config\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.466752 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-dns-svc\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.485247 
4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf8gv\" (UniqueName: \"kubernetes.io/projected/e4a5140a-d9f8-435d-a9fd-9385591e44fc-kube-api-access-pf8gv\") pod \"dnsmasq-dns-756c696cf7-mjwln\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:03 crc kubenswrapper[4856]: I1122 08:52:03.623163 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:04 crc kubenswrapper[4856]: I1122 08:52:04.084855 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-756c696cf7-mjwln"] Nov 22 08:52:05 crc kubenswrapper[4856]: I1122 08:52:05.325317 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerID="95242f022ff74249ab8bf4fe6346d8c857fea011a3d37fa6d1b8c11df410d6b3" exitCode=0 Nov 22 08:52:05 crc kubenswrapper[4856]: I1122 08:52:05.325450 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" event={"ID":"e4a5140a-d9f8-435d-a9fd-9385591e44fc","Type":"ContainerDied","Data":"95242f022ff74249ab8bf4fe6346d8c857fea011a3d37fa6d1b8c11df410d6b3"} Nov 22 08:52:05 crc kubenswrapper[4856]: I1122 08:52:05.325958 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" event={"ID":"e4a5140a-d9f8-435d-a9fd-9385591e44fc","Type":"ContainerStarted","Data":"65e7e98e668039d972350aadcbaccd14e0f538ac9eda3f55f412e1fc71621798"} Nov 22 08:52:06 crc kubenswrapper[4856]: I1122 08:52:06.337291 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" event={"ID":"e4a5140a-d9f8-435d-a9fd-9385591e44fc","Type":"ContainerStarted","Data":"5a21c14563b994f5531b9ca00d2b7c834a57e1b6ff57f2bb2099394f029fcd7b"} Nov 22 08:52:06 crc kubenswrapper[4856]: I1122 08:52:06.337670 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:06 crc kubenswrapper[4856]: I1122 08:52:06.361752 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" podStartSLOduration=3.3617333990000002 podStartE2EDuration="3.361733399s" podCreationTimestamp="2025-11-22 08:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:52:06.353976729 +0000 UTC m=+6568.767369997" watchObservedRunningTime="2025-11-22 08:52:06.361733399 +0000 UTC m=+6568.775126657" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.625336 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.715408 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f89c44cf-dxqq7"] Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.715640 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerName="dnsmasq-dns" containerID="cri-o://c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0" gracePeriod=10 Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.875004 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-779cdcc5bf-chvxh"] Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 
08:52:13.885092 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.901077 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-779cdcc5bf-chvxh"] Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.996783 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-ovsdbserver-sb\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.996883 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-openstack-cell1\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.996925 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-config\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.996969 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-ovsdbserver-nb\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.997009 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-dns-svc\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:13 crc kubenswrapper[4856]: I1122 08:52:13.997042 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q58q2\" (UniqueName: \"kubernetes.io/projected/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-kube-api-access-q58q2\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.098448 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-openstack-cell1\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.098534 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-config\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 
08:52:14.098589 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-ovsdbserver-nb\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.098631 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-dns-svc\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.098663 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q58q2\" (UniqueName: \"kubernetes.io/projected/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-kube-api-access-q58q2\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.098695 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-ovsdbserver-sb\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.099692 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-ovsdbserver-sb\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.100707 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-ovsdbserver-nb\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.110865 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-dns-svc\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.110877 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-openstack-cell1\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.111201 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-config\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.139459 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q58q2\" 
(UniqueName: \"kubernetes.io/projected/7fdfcc7b-57a0-42bc-9ee5-df8530a53345-kube-api-access-q58q2\") pod \"dnsmasq-dns-779cdcc5bf-chvxh\" (UID: \"7fdfcc7b-57a0-42bc-9ee5-df8530a53345\") " pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.234098 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.342122 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.427011 4856 generic.go:334] "Generic (PLEG): container finished" podID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerID="c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0" exitCode=0 Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.427070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" event={"ID":"7d1ca7c1-892b-402a-a523-407168b2deb8","Type":"ContainerDied","Data":"c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0"} Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.427109 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" event={"ID":"7d1ca7c1-892b-402a-a523-407168b2deb8","Type":"ContainerDied","Data":"89efa9ec7a3b43ccff55bd5b4a0b920ed97c6c54837f76df3402a0c80e1767bf"} Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.427134 4856 scope.go:117] "RemoveContainer" containerID="c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.427301 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.464448 4856 scope.go:117] "RemoveContainer" containerID="e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.506826 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-config\") pod \"7d1ca7c1-892b-402a-a523-407168b2deb8\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.506934 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-sb\") pod \"7d1ca7c1-892b-402a-a523-407168b2deb8\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.507014 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-dns-svc\") pod \"7d1ca7c1-892b-402a-a523-407168b2deb8\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.507060 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-nb\") pod \"7d1ca7c1-892b-402a-a523-407168b2deb8\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.507220 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drdq6\" (UniqueName: \"kubernetes.io/projected/7d1ca7c1-892b-402a-a523-407168b2deb8-kube-api-access-drdq6\") pod \"7d1ca7c1-892b-402a-a523-407168b2deb8\" (UID: \"7d1ca7c1-892b-402a-a523-407168b2deb8\") " Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.515868 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1ca7c1-892b-402a-a523-407168b2deb8-kube-api-access-drdq6" (OuterVolumeSpecName: "kube-api-access-drdq6") pod "7d1ca7c1-892b-402a-a523-407168b2deb8" (UID: "7d1ca7c1-892b-402a-a523-407168b2deb8"). InnerVolumeSpecName "kube-api-access-drdq6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.548484 4856 scope.go:117] "RemoveContainer" containerID="c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0" Nov 22 08:52:14 crc kubenswrapper[4856]: E1122 08:52:14.548920 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0\": container with ID starting with c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0 not found: ID does not exist" containerID="c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.548970 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0"} err="failed to get container status \"c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0\": rpc error: code = NotFound desc = could not find container \"c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0\": container with ID starting with c49cd7874d707676c448d2df1af807f11f40c863f6ff01ac56d2c1b9066605d0 not found: ID does not exist" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.549002 4856 scope.go:117] "RemoveContainer" containerID="e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2" Nov 22 08:52:14 crc kubenswrapper[4856]: E1122 08:52:14.549255 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2\": container with ID starting with e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2 not found: ID does not exist" containerID="e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.549283 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2"} err="failed to get container status \"e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2\": rpc error: code = NotFound desc = could not find container \"e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2\": container with ID starting with e88d04cdaa6bedf5c5dcde32dec614a1753553d28e9edad23987ed3d0e8bc2b2 not found: ID does not exist" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.565050 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-config" (OuterVolumeSpecName: "config") pod "7d1ca7c1-892b-402a-a523-407168b2deb8" (UID: "7d1ca7c1-892b-402a-a523-407168b2deb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.565163 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7d1ca7c1-892b-402a-a523-407168b2deb8" (UID: "7d1ca7c1-892b-402a-a523-407168b2deb8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.578291 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7d1ca7c1-892b-402a-a523-407168b2deb8" (UID: "7d1ca7c1-892b-402a-a523-407168b2deb8"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.586572 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7d1ca7c1-892b-402a-a523-407168b2deb8" (UID: "7d1ca7c1-892b-402a-a523-407168b2deb8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.609291 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drdq6\" (UniqueName: \"kubernetes.io/projected/7d1ca7c1-892b-402a-a523-407168b2deb8-kube-api-access-drdq6\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.609323 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.609336 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.609350 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.609361 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7d1ca7c1-892b-402a-a523-407168b2deb8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.733830 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-779cdcc5bf-chvxh"] Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.791617 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f89c44cf-dxqq7"] Nov 22 08:52:14 crc kubenswrapper[4856]: I1122 08:52:14.825021 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f89c44cf-dxqq7"] Nov 22 08:52:15 crc kubenswrapper[4856]: I1122 08:52:15.444922 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" event={"ID":"7fdfcc7b-57a0-42bc-9ee5-df8530a53345","Type":"ContainerStarted","Data":"244d492946b5c04eb586b54ef1b3ae51e6c52d6a107bb1176e1e0c923c87aa73"} Nov 22 08:52:15 crc kubenswrapper[4856]: I1122 08:52:15.445478 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" event={"ID":"7fdfcc7b-57a0-42bc-9ee5-df8530a53345","Type":"ContainerStarted","Data":"4ff5d5a7f4a31db09bccc1dc040390abbdfc04633d8c6fd8d5da560ffc361f86"} Nov 22 08:52:16 crc kubenswrapper[4856]: I1122 08:52:16.473238 4856 generic.go:334] "Generic (PLEG): container finished" podID="7fdfcc7b-57a0-42bc-9ee5-df8530a53345" 
containerID="244d492946b5c04eb586b54ef1b3ae51e6c52d6a107bb1176e1e0c923c87aa73" exitCode=0 Nov 22 08:52:16 crc kubenswrapper[4856]: I1122 08:52:16.473335 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" event={"ID":"7fdfcc7b-57a0-42bc-9ee5-df8530a53345","Type":"ContainerDied","Data":"244d492946b5c04eb586b54ef1b3ae51e6c52d6a107bb1176e1e0c923c87aa73"} Nov 22 08:52:17 crc kubenswrapper[4856]: I1122 08:52:17.377709 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" path="/var/lib/kubelet/pods/7d1ca7c1-892b-402a-a523-407168b2deb8/volumes" Nov 22 08:52:18 crc kubenswrapper[4856]: I1122 08:52:18.495070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" event={"ID":"7fdfcc7b-57a0-42bc-9ee5-df8530a53345","Type":"ContainerStarted","Data":"112f9add4c60b13c680e00629124aaf706efe8271f206832e9f111c7fa362ccd"} Nov 22 08:52:18 crc kubenswrapper[4856]: I1122 08:52:18.495364 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:18 crc kubenswrapper[4856]: I1122 08:52:18.524563 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" podStartSLOduration=5.524539655 podStartE2EDuration="5.524539655s" podCreationTimestamp="2025-11-22 08:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 08:52:18.519628672 +0000 UTC m=+6580.933021960" watchObservedRunningTime="2025-11-22 08:52:18.524539655 +0000 UTC m=+6580.937932933" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.125585 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f89c44cf-dxqq7" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.102:5353: i/o timeout" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.819814 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9"] Nov 22 08:52:19 crc kubenswrapper[4856]: E1122 08:52:19.820222 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerName="init" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.820234 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerName="init" Nov 22 08:52:19 crc kubenswrapper[4856]: E1122 08:52:19.820248 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerName="dnsmasq-dns" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.820254 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerName="dnsmasq-dns" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.820503 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1ca7c1-892b-402a-a523-407168b2deb8" containerName="dnsmasq-dns" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.821236 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.823544 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.823588 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.824846 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.826234 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.899180 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9"] Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.938646 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stb6g\" (UniqueName: \"kubernetes.io/projected/21bb02ee-d25f-4c9d-95a8-84f642661787-kube-api-access-stb6g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.938728 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.938786 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:19 crc kubenswrapper[4856]: I1122 08:52:19.939125 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.040443 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.040859 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-stb6g\" (UniqueName: \"kubernetes.io/projected/21bb02ee-d25f-4c9d-95a8-84f642661787-kube-api-access-stb6g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.041036 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.041175 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.050949 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.051327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.051363 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.064415 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stb6g\" (UniqueName: \"kubernetes.io/projected/21bb02ee-d25f-4c9d-95a8-84f642661787-kube-api-access-stb6g\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.142149 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:52:20 crc kubenswrapper[4856]: I1122 08:52:20.767015 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9"] Nov 22 08:52:21 crc kubenswrapper[4856]: I1122 08:52:21.523264 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" event={"ID":"21bb02ee-d25f-4c9d-95a8-84f642661787","Type":"ContainerStarted","Data":"c0744e644b2b9efe0bece7f043ed4e2cf40320ab53ced83b74e120855c26aa12"} Nov 22 08:52:24 crc kubenswrapper[4856]: I1122 08:52:24.235679 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-779cdcc5bf-chvxh" Nov 22 08:52:24 crc kubenswrapper[4856]: I1122 08:52:24.336768 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756c696cf7-mjwln"] Nov 22 08:52:24 crc kubenswrapper[4856]: I1122 08:52:24.337099 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="dnsmasq-dns" containerID="cri-o://5a21c14563b994f5531b9ca00d2b7c834a57e1b6ff57f2bb2099394f029fcd7b" gracePeriod=10 Nov 22 08:52:25 crc kubenswrapper[4856]: I1122 08:52:25.569179 4856 generic.go:334] "Generic (PLEG): container finished" podID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerID="5a21c14563b994f5531b9ca00d2b7c834a57e1b6ff57f2bb2099394f029fcd7b" exitCode=0 Nov 22 08:52:25 crc kubenswrapper[4856]: I1122 08:52:25.569268 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" event={"ID":"e4a5140a-d9f8-435d-a9fd-9385591e44fc","Type":"ContainerDied","Data":"5a21c14563b994f5531b9ca00d2b7c834a57e1b6ff57f2bb2099394f029fcd7b"} Nov 22 08:52:28 crc kubenswrapper[4856]: I1122 08:52:28.623822 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.144:5353: connect: connection refused" Nov 22 08:52:33 crc kubenswrapper[4856]: I1122 08:52:33.624405 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.144:5353: connect: connection refused" Nov 22 08:52:35 crc kubenswrapper[4856]: I1122 08:52:35.813599 4856 scope.go:117] "RemoveContainer" containerID="58575bb00fe7a50af245f127b6ba46e7696c6107a76c3495c0f683146111a042" Nov 22 08:52:38 crc kubenswrapper[4856]: I1122 08:52:38.624621 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.144:5353: connect: connection refused" Nov 22 08:52:38 crc kubenswrapper[4856]: I1122 08:52:38.624997 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:38 crc kubenswrapper[4856]: I1122 08:52:38.894133 4856 scope.go:117] "RemoveContainer" containerID="d5900607e67700e179cab68677f87421590f3b86544f521ada53239be99af627" Nov 22 08:52:38 crc kubenswrapper[4856]: I1122 08:52:38.981913 4856 scope.go:117] "RemoveContainer" 
containerID="c01e1218def5500b9e8246aad5992250187408a00326616c53a3cf2e08346b5b" Nov 22 08:52:39 crc kubenswrapper[4856]: E1122 08:52:39.019699 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 22 08:52:39 crc kubenswrapper[4856]: E1122 08:52:39.020129 4856 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 22 08:52:39 crc kubenswrapper[4856]: container &Container{Name:pre-adoption-validation-openstack-pre-adoption-openstack-cell1,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p osp.edpm.pre_adoption_validation -i pre-adoption-validation-openstack-pre-adoption-openstack-cell1],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_CALLBACKS_ENABLED,Value:ansible.posix.profile_tasks,ValueFrom:nil,},EnvVar{Name:ANSIBLE_CALLBACK_RESULT_FORMAT,Value:yaml,ValueFrom:nil,},EnvVar{Name:ANSIBLE_FORCE_COLOR,Value:True,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DISPLAY_ARGS_TO_STDOUT,Value:True,ValueFrom:nil,},EnvVar{Name:ANSIBLE_SSH_ARGS,Value:-C -o ControlMaster=auto -o ControlPersist=80s,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY,Value:1,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 22 08:52:39 crc kubenswrapper[4856]: osp.edpm.pre_adoption_validation Nov 22 08:52:39 crc kubenswrapper[4856]: Nov 22 08:52:39 crc kubenswrapper[4856]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 22 08:52:39 crc kubenswrapper[4856]: edpm_override_hosts: openstack-cell1 Nov 22 08:52:39 crc kubenswrapper[4856]: edpm_service_type: pre-adoption-validation Nov 22 08:52:39 crc kubenswrapper[4856]: edpm_services_override: [pre-adoption-validation] Nov 22 08:52:39 crc kubenswrapper[4856]: Nov 22 08:52:39 crc kubenswrapper[4856]: Nov 22 08:52:39 crc kubenswrapper[4856]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:pre-adoption-validation-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/pre-adoption-validation,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-stb6g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9_openstack(21bb02ee-d25f-4c9d-95a8-84f642661787): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 22 08:52:39 crc kubenswrapper[4856]: > logger="UnhandledError" Nov 22 08:52:39 crc kubenswrapper[4856]: E1122 08:52:39.021957 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pre-adoption-validation-openstack-pre-adoption-openstack-cell1\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" podUID="21bb02ee-d25f-4c9d-95a8-84f642661787" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.256896 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.292616 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-dns-svc\") pod \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.292749 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-openstack-cell1\") pod \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.292817 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf8gv\" (UniqueName: \"kubernetes.io/projected/e4a5140a-d9f8-435d-a9fd-9385591e44fc-kube-api-access-pf8gv\") pod \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.292865 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-config\") pod \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.292883 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-nb\") pod \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.292927 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-sb\") pod \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\" (UID: \"e4a5140a-d9f8-435d-a9fd-9385591e44fc\") " Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.302690 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a5140a-d9f8-435d-a9fd-9385591e44fc-kube-api-access-pf8gv" (OuterVolumeSpecName: "kube-api-access-pf8gv") pod "e4a5140a-d9f8-435d-a9fd-9385591e44fc" (UID: "e4a5140a-d9f8-435d-a9fd-9385591e44fc"). InnerVolumeSpecName "kube-api-access-pf8gv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.350054 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "e4a5140a-d9f8-435d-a9fd-9385591e44fc" (UID: "e4a5140a-d9f8-435d-a9fd-9385591e44fc"). InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.356881 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-config" (OuterVolumeSpecName: "config") pod "e4a5140a-d9f8-435d-a9fd-9385591e44fc" (UID: "e4a5140a-d9f8-435d-a9fd-9385591e44fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.359134 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e4a5140a-d9f8-435d-a9fd-9385591e44fc" (UID: "e4a5140a-d9f8-435d-a9fd-9385591e44fc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.360854 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4a5140a-d9f8-435d-a9fd-9385591e44fc" (UID: "e4a5140a-d9f8-435d-a9fd-9385591e44fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.362271 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e4a5140a-d9f8-435d-a9fd-9385591e44fc" (UID: "e4a5140a-d9f8-435d-a9fd-9385591e44fc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.395396 4856 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.395432 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.395442 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf8gv\" (UniqueName: \"kubernetes.io/projected/e4a5140a-d9f8-435d-a9fd-9385591e44fc-kube-api-access-pf8gv\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.395453 4856 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-config\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.395461 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.395469 4856 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e4a5140a-d9f8-435d-a9fd-9385591e44fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.815534 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.815551 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-756c696cf7-mjwln" event={"ID":"e4a5140a-d9f8-435d-a9fd-9385591e44fc","Type":"ContainerDied","Data":"65e7e98e668039d972350aadcbaccd14e0f538ac9eda3f55f412e1fc71621798"} Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.815641 4856 scope.go:117] "RemoveContainer" containerID="5a21c14563b994f5531b9ca00d2b7c834a57e1b6ff57f2bb2099394f029fcd7b" Nov 22 08:52:39 crc kubenswrapper[4856]: E1122 08:52:39.818302 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"pre-adoption-validation-openstack-pre-adoption-openstack-cell1\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" podUID="21bb02ee-d25f-4c9d-95a8-84f642661787" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.847964 4856 scope.go:117] "RemoveContainer" containerID="95242f022ff74249ab8bf4fe6346d8c857fea011a3d37fa6d1b8c11df410d6b3" Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.855892 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-756c696cf7-mjwln"] Nov 22 08:52:39 crc kubenswrapper[4856]: I1122 08:52:39.866163 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-756c696cf7-mjwln"] Nov 22 08:52:40 crc kubenswrapper[4856]: I1122 08:52:40.722899 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" 
path="/var/lib/kubelet/pods/e4a5140a-d9f8-435d-a9fd-9385591e44fc/volumes" Nov 22 08:52:53 crc kubenswrapper[4856]: I1122 08:52:53.214776 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:52:53 crc kubenswrapper[4856]: I1122 08:52:53.967126 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" event={"ID":"21bb02ee-d25f-4c9d-95a8-84f642661787","Type":"ContainerStarted","Data":"63d2bac61b3aa817748e6bcb6fd33e9937cbb8c3312f1e44888adc5a5386b12b"} Nov 22 08:52:53 crc kubenswrapper[4856]: I1122 08:52:53.996501 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" podStartSLOduration=2.560351101 podStartE2EDuration="34.996468063s" podCreationTimestamp="2025-11-22 08:52:19 +0000 UTC" firstStartedPulling="2025-11-22 08:52:20.775438005 +0000 UTC m=+6583.188831283" lastFinishedPulling="2025-11-22 08:52:53.211554987 +0000 UTC m=+6615.624948245" observedRunningTime="2025-11-22 08:52:53.990211145 +0000 UTC m=+6616.403604403" watchObservedRunningTime="2025-11-22 08:52:53.996468063 +0000 UTC m=+6616.409861321" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.059297 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-s9znq"] Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.069238 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-7vrtm"] Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.079630 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-s9znq"] Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.089948 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7vrtm"] Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.117022 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-swhtm"] Nov 22 08:53:02 crc kubenswrapper[4856]: E1122 08:53:02.117572 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="init" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.117596 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="init" Nov 22 08:53:02 crc kubenswrapper[4856]: E1122 08:53:02.117623 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="dnsmasq-dns" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.117632 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="dnsmasq-dns" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.117915 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a5140a-d9f8-435d-a9fd-9385591e44fc" containerName="dnsmasq-dns" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.119760 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.129172 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swhtm"] Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.194264 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-catalog-content\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.194349 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-utilities\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.194404 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfrwh\" (UniqueName: \"kubernetes.io/projected/c6812afd-ef72-4724-bc4c-f67df344dbca-kube-api-access-gfrwh\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.301055 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-catalog-content\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.301228 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-utilities\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.301356 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfrwh\" (UniqueName: \"kubernetes.io/projected/c6812afd-ef72-4724-bc4c-f67df344dbca-kube-api-access-gfrwh\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.303048 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-catalog-content\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.303383 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-utilities\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.334805 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gfrwh\" (UniqueName: \"kubernetes.io/projected/c6812afd-ef72-4724-bc4c-f67df344dbca-kube-api-access-gfrwh\") pod \"certified-operators-swhtm\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.449442 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.731453 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be" path="/var/lib/kubelet/pods/1cfd0ea5-a10e-488e-aa95-25ca2fbaa3be/volumes" Nov 22 08:53:02 crc kubenswrapper[4856]: I1122 08:53:02.732449 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa3b1c6-8a5e-4182-8cdb-6a229c647fe0" path="/var/lib/kubelet/pods/afa3b1c6-8a5e-4182-8cdb-6a229c647fe0/volumes" Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:02.954871 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-swhtm"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.054104 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3e38-account-create-5xm72"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.072575 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1d70-account-create-r64jw"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.084305 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3e38-account-create-5xm72"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.092345 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1d70-account-create-r64jw"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.101001 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ef37-account-create-hdh4q"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.109491 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-92ttx"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.119759 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-92ttx"] Nov 22 08:53:03 crc kubenswrapper[4856]: I1122 08:53:03.128737 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ef37-account-create-hdh4q"] Nov 22 08:53:04 crc kubenswrapper[4856]: I1122 08:53:04.075806 4856 generic.go:334] "Generic (PLEG): container finished" podID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerID="e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970" exitCode=0 Nov 22 08:53:04 crc kubenswrapper[4856]: I1122 08:53:04.075900 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swhtm" event={"ID":"c6812afd-ef72-4724-bc4c-f67df344dbca","Type":"ContainerDied","Data":"e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970"} Nov 22 08:53:04 crc kubenswrapper[4856]: I1122 08:53:04.076098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swhtm" event={"ID":"c6812afd-ef72-4724-bc4c-f67df344dbca","Type":"ContainerStarted","Data":"894b4a5026680090e9da55df3492b5af23db9e65f30de6d295daf21b4e3a99b9"} Nov 22 08:53:04 crc kubenswrapper[4856]: I1122 08:53:04.722591 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="08390c68-8119-42e6-a654-44b0ccd422ad" path="/var/lib/kubelet/pods/08390c68-8119-42e6-a654-44b0ccd422ad/volumes" Nov 22 08:53:04 crc kubenswrapper[4856]: I1122 08:53:04.723621 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20510931-5b0d-4be7-beec-83051479beb3" path="/var/lib/kubelet/pods/20510931-5b0d-4be7-beec-83051479beb3/volumes" Nov 22 08:53:04 crc kubenswrapper[4856]: I1122 08:53:04.724308 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f20c18-ea7b-4018-a74d-3e18bcd85250" path="/var/lib/kubelet/pods/b5f20c18-ea7b-4018-a74d-3e18bcd85250/volumes" Nov 22 08:53:04 crc kubenswrapper[4856]: I1122 08:53:04.724996 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c755cad0-e196-4b7a-ba18-c10722c9b550" path="/var/lib/kubelet/pods/c755cad0-e196-4b7a-ba18-c10722c9b550/volumes" Nov 22 08:53:05 crc kubenswrapper[4856]: I1122 08:53:05.090754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swhtm" event={"ID":"c6812afd-ef72-4724-bc4c-f67df344dbca","Type":"ContainerStarted","Data":"1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396"} Nov 22 08:53:07 crc kubenswrapper[4856]: I1122 08:53:07.119202 4856 generic.go:334] "Generic (PLEG): container finished" podID="21bb02ee-d25f-4c9d-95a8-84f642661787" containerID="63d2bac61b3aa817748e6bcb6fd33e9937cbb8c3312f1e44888adc5a5386b12b" exitCode=0 Nov 22 08:53:07 crc kubenswrapper[4856]: I1122 08:53:07.119281 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" event={"ID":"21bb02ee-d25f-4c9d-95a8-84f642661787","Type":"ContainerDied","Data":"63d2bac61b3aa817748e6bcb6fd33e9937cbb8c3312f1e44888adc5a5386b12b"} Nov 22 08:53:07 crc kubenswrapper[4856]: I1122 08:53:07.122794 4856 generic.go:334] "Generic (PLEG): container finished" podID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerID="1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396" exitCode=0 Nov 22 08:53:07 crc kubenswrapper[4856]: I1122 08:53:07.122845 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swhtm" event={"ID":"c6812afd-ef72-4724-bc4c-f67df344dbca","Type":"ContainerDied","Data":"1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396"} Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.135453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swhtm" event={"ID":"c6812afd-ef72-4724-bc4c-f67df344dbca","Type":"ContainerStarted","Data":"cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448"} Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.170370 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-swhtm" podStartSLOduration=2.529151969 podStartE2EDuration="6.170330884s" podCreationTimestamp="2025-11-22 08:53:02 +0000 UTC" firstStartedPulling="2025-11-22 08:53:04.078929205 +0000 UTC m=+6626.492322463" lastFinishedPulling="2025-11-22 08:53:07.72010812 +0000 UTC m=+6630.133501378" observedRunningTime="2025-11-22 08:53:08.156871612 +0000 UTC m=+6630.570264880" watchObservedRunningTime="2025-11-22 08:53:08.170330884 +0000 UTC m=+6630.583724142" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.616984 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.740321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-inventory\") pod \"21bb02ee-d25f-4c9d-95a8-84f642661787\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.740397 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stb6g\" (UniqueName: \"kubernetes.io/projected/21bb02ee-d25f-4c9d-95a8-84f642661787-kube-api-access-stb6g\") pod \"21bb02ee-d25f-4c9d-95a8-84f642661787\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.740815 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-pre-adoption-validation-combined-ca-bundle\") pod \"21bb02ee-d25f-4c9d-95a8-84f642661787\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.740948 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-ssh-key\") pod \"21bb02ee-d25f-4c9d-95a8-84f642661787\" (UID: \"21bb02ee-d25f-4c9d-95a8-84f642661787\") " Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.753569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21bb02ee-d25f-4c9d-95a8-84f642661787-kube-api-access-stb6g" (OuterVolumeSpecName: "kube-api-access-stb6g") pod "21bb02ee-d25f-4c9d-95a8-84f642661787" (UID: "21bb02ee-d25f-4c9d-95a8-84f642661787"). InnerVolumeSpecName "kube-api-access-stb6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.753998 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "21bb02ee-d25f-4c9d-95a8-84f642661787" (UID: "21bb02ee-d25f-4c9d-95a8-84f642661787"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.771177 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-inventory" (OuterVolumeSpecName: "inventory") pod "21bb02ee-d25f-4c9d-95a8-84f642661787" (UID: "21bb02ee-d25f-4c9d-95a8-84f642661787"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.772448 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21bb02ee-d25f-4c9d-95a8-84f642661787" (UID: "21bb02ee-d25f-4c9d-95a8-84f642661787"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.844175 4856 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.844235 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.844248 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21bb02ee-d25f-4c9d-95a8-84f642661787-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:08 crc kubenswrapper[4856]: I1122 08:53:08.844259 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stb6g\" (UniqueName: \"kubernetes.io/projected/21bb02ee-d25f-4c9d-95a8-84f642661787-kube-api-access-stb6g\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:09 crc kubenswrapper[4856]: I1122 08:53:09.148632 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" event={"ID":"21bb02ee-d25f-4c9d-95a8-84f642661787","Type":"ContainerDied","Data":"c0744e644b2b9efe0bece7f043ed4e2cf40320ab53ced83b74e120855c26aa12"} Nov 22 08:53:09 crc kubenswrapper[4856]: I1122 08:53:09.149666 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0744e644b2b9efe0bece7f043ed4e2cf40320ab53ced83b74e120855c26aa12" Nov 22 08:53:09 crc kubenswrapper[4856]: I1122 08:53:09.148850 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9" Nov 22 08:53:12 crc kubenswrapper[4856]: I1122 08:53:12.450208 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:12 crc kubenswrapper[4856]: I1122 08:53:12.450852 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:12 crc kubenswrapper[4856]: I1122 08:53:12.523425 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:13 crc kubenswrapper[4856]: I1122 08:53:13.247737 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:13 crc kubenswrapper[4856]: I1122 08:53:13.298691 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swhtm"] Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.217936 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-swhtm" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="registry-server" containerID="cri-o://cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448" gracePeriod=2 Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.711416 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.817062 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-utilities\") pod \"c6812afd-ef72-4724-bc4c-f67df344dbca\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.817162 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfrwh\" (UniqueName: \"kubernetes.io/projected/c6812afd-ef72-4724-bc4c-f67df344dbca-kube-api-access-gfrwh\") pod \"c6812afd-ef72-4724-bc4c-f67df344dbca\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.817187 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-catalog-content\") pod \"c6812afd-ef72-4724-bc4c-f67df344dbca\" (UID: \"c6812afd-ef72-4724-bc4c-f67df344dbca\") " Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.817940 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-utilities" (OuterVolumeSpecName: "utilities") pod "c6812afd-ef72-4724-bc4c-f67df344dbca" (UID: "c6812afd-ef72-4724-bc4c-f67df344dbca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.825668 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6812afd-ef72-4724-bc4c-f67df344dbca-kube-api-access-gfrwh" (OuterVolumeSpecName: "kube-api-access-gfrwh") pod "c6812afd-ef72-4724-bc4c-f67df344dbca" (UID: "c6812afd-ef72-4724-bc4c-f67df344dbca"). InnerVolumeSpecName "kube-api-access-gfrwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.868463 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6812afd-ef72-4724-bc4c-f67df344dbca" (UID: "c6812afd-ef72-4724-bc4c-f67df344dbca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.919713 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.919750 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfrwh\" (UniqueName: \"kubernetes.io/projected/c6812afd-ef72-4724-bc4c-f67df344dbca-kube-api-access-gfrwh\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:15 crc kubenswrapper[4856]: I1122 08:53:15.919763 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6812afd-ef72-4724-bc4c-f67df344dbca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.229120 4856 generic.go:334] "Generic (PLEG): container finished" podID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerID="cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448" exitCode=0 Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.229181 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swhtm" event={"ID":"c6812afd-ef72-4724-bc4c-f67df344dbca","Type":"ContainerDied","Data":"cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448"} Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.229216 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-swhtm" event={"ID":"c6812afd-ef72-4724-bc4c-f67df344dbca","Type":"ContainerDied","Data":"894b4a5026680090e9da55df3492b5af23db9e65f30de6d295daf21b4e3a99b9"} Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.229219 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-swhtm" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.229235 4856 scope.go:117] "RemoveContainer" containerID="cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.264780 4856 scope.go:117] "RemoveContainer" containerID="1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.289724 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-swhtm"] Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.310450 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-swhtm"] Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.314984 4856 scope.go:117] "RemoveContainer" containerID="e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.358930 4856 scope.go:117] "RemoveContainer" containerID="cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448" Nov 22 08:53:16 crc kubenswrapper[4856]: E1122 08:53:16.359339 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448\": container with ID starting with cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448 not found: ID does not exist" containerID="cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.359376 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448"} err="failed to get container status \"cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448\": rpc error: code = NotFound desc = could not find container \"cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448\": container with ID starting with cdcef903f86305b8658e28cd6253e59b43d478cebf21d92b987bf12194ff1448 not found: ID does not exist" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.359397 4856 scope.go:117] "RemoveContainer" containerID="1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396" Nov 22 08:53:16 crc kubenswrapper[4856]: E1122 08:53:16.359800 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396\": container with ID starting with 1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396 not found: ID does not exist" containerID="1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.359849 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396"} err="failed to get container status \"1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396\": rpc error: code = NotFound desc = could not find container \"1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396\": container with ID starting with 1bf4ba4108c570e843ba6fa211b3cce828d706078accff00b5a13e8c837f3396 not found: ID does not exist" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.359899 4856 scope.go:117] "RemoveContainer" 
containerID="e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970" Nov 22 08:53:16 crc kubenswrapper[4856]: E1122 08:53:16.360192 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970\": container with ID starting with e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970 not found: ID does not exist" containerID="e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.360246 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970"} err="failed to get container status \"e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970\": rpc error: code = NotFound desc = could not find container \"e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970\": container with ID starting with e11c667563596035a2ade6fe4a1f25d26540f59173c8cf7362855a446708b970 not found: ID does not exist" Nov 22 08:53:16 crc kubenswrapper[4856]: I1122 08:53:16.720652 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" path="/var/lib/kubelet/pods/c6812afd-ef72-4724-bc4c-f67df344dbca/volumes" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.595118 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx"] Nov 22 08:53:18 crc kubenswrapper[4856]: E1122 08:53:18.595929 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21bb02ee-d25f-4c9d-95a8-84f642661787" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.595949 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="21bb02ee-d25f-4c9d-95a8-84f642661787" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 22 08:53:18 crc kubenswrapper[4856]: E1122 08:53:18.596016 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="registry-server" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.596026 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="registry-server" Nov 22 08:53:18 crc kubenswrapper[4856]: E1122 08:53:18.596038 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="extract-utilities" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.596045 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="extract-utilities" Nov 22 08:53:18 crc kubenswrapper[4856]: E1122 08:53:18.596060 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="extract-content" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.596068 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="extract-content" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.596331 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6812afd-ef72-4724-bc4c-f67df344dbca" containerName="registry-server" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.596358 4856 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="21bb02ee-d25f-4c9d-95a8-84f642661787" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.597363 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.599674 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.599971 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.599989 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.600015 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.610361 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx"] Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.679618 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.679707 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7n76\" (UniqueName: \"kubernetes.io/projected/bf97b43b-e761-42f4-bd6b-837f60e9598c-kube-api-access-f7n76\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.679830 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.679889 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.782073 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc 
kubenswrapper[4856]: I1122 08:53:18.782151 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7n76\" (UniqueName: \"kubernetes.io/projected/bf97b43b-e761-42f4-bd6b-837f60e9598c-kube-api-access-f7n76\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.782270 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.782351 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.788829 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.789176 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.790065 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.799177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7n76\" (UniqueName: \"kubernetes.io/projected/bf97b43b-e761-42f4-bd6b-837f60e9598c-kube-api-access-f7n76\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:18 crc kubenswrapper[4856]: I1122 08:53:18.919370 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 08:53:19 crc kubenswrapper[4856]: I1122 08:53:19.458236 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx"] Nov 22 08:53:20 crc kubenswrapper[4856]: I1122 08:53:20.266099 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" event={"ID":"bf97b43b-e761-42f4-bd6b-837f60e9598c","Type":"ContainerStarted","Data":"50954fe8ab97cf54c1006ecafe298485d83798b8e70db30b717b68315834a68a"} Nov 22 08:53:21 crc kubenswrapper[4856]: I1122 08:53:21.275093 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" event={"ID":"bf97b43b-e761-42f4-bd6b-837f60e9598c","Type":"ContainerStarted","Data":"b3f8c28b1ff122fe9b8c8b2a8f693263ab0d75407a6bb2b176fcc67ddfb1e100"} Nov 22 08:53:21 crc kubenswrapper[4856]: I1122 08:53:21.307535 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" podStartSLOduration=2.665294413 podStartE2EDuration="3.307492543s" podCreationTimestamp="2025-11-22 08:53:18 +0000 UTC" firstStartedPulling="2025-11-22 08:53:19.460464518 +0000 UTC m=+6641.873857776" lastFinishedPulling="2025-11-22 08:53:20.102662648 +0000 UTC m=+6642.516055906" observedRunningTime="2025-11-22 08:53:21.300897575 +0000 UTC m=+6643.714290843" watchObservedRunningTime="2025-11-22 08:53:21.307492543 +0000 UTC m=+6643.720885801" Nov 22 08:53:22 crc kubenswrapper[4856]: I1122 08:53:22.060442 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qvrrw"] Nov 22 08:53:22 crc kubenswrapper[4856]: I1122 08:53:22.071264 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qvrrw"] Nov 22 08:53:22 crc kubenswrapper[4856]: I1122 08:53:22.726914 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53c4a37a-d990-4b77-bed7-8537e9d9a0ad" path="/var/lib/kubelet/pods/53c4a37a-d990-4b77-bed7-8537e9d9a0ad/volumes" Nov 22 08:53:29 crc kubenswrapper[4856]: I1122 08:53:29.754604 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:53:29 crc kubenswrapper[4856]: I1122 08:53:29.755178 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.330637 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-clhx2"] Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.333376 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.343908 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-clhx2"] Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.476490 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-utilities\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.476632 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf6b5\" (UniqueName: \"kubernetes.io/projected/691672a0-1d7e-4136-946c-4d965e3b88b8-kube-api-access-vf6b5\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.476756 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-catalog-content\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.578536 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-catalog-content\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.578683 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-utilities\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.578747 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf6b5\" (UniqueName: \"kubernetes.io/projected/691672a0-1d7e-4136-946c-4d965e3b88b8-kube-api-access-vf6b5\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.579776 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-utilities\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.580015 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-catalog-content\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.600294 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vf6b5\" (UniqueName: \"kubernetes.io/projected/691672a0-1d7e-4136-946c-4d965e3b88b8-kube-api-access-vf6b5\") pod \"redhat-marketplace-clhx2\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:32 crc kubenswrapper[4856]: I1122 08:53:32.671109 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:33 crc kubenswrapper[4856]: I1122 08:53:33.137497 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-clhx2"] Nov 22 08:53:33 crc kubenswrapper[4856]: I1122 08:53:33.388131 4856 generic.go:334] "Generic (PLEG): container finished" podID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerID="74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693" exitCode=0 Nov 22 08:53:33 crc kubenswrapper[4856]: I1122 08:53:33.388221 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clhx2" event={"ID":"691672a0-1d7e-4136-946c-4d965e3b88b8","Type":"ContainerDied","Data":"74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693"} Nov 22 08:53:33 crc kubenswrapper[4856]: I1122 08:53:33.388471 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clhx2" event={"ID":"691672a0-1d7e-4136-946c-4d965e3b88b8","Type":"ContainerStarted","Data":"1b76e4a67234115c8f78aa247dc8092c657fd98a657b801bee07aed9dfb0a234"} Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.401098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clhx2" event={"ID":"691672a0-1d7e-4136-946c-4d965e3b88b8","Type":"ContainerStarted","Data":"1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff"} Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.732817 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-twtgv"] Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.736762 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.751038 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twtgv"] Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.825073 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-catalog-content\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.825195 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v9tn\" (UniqueName: \"kubernetes.io/projected/b744c7bd-177e-41b4-8693-06a14300bb22-kube-api-access-6v9tn\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.825525 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-utilities\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.927181 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-catalog-content\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.927310 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v9tn\" (UniqueName: \"kubernetes.io/projected/b744c7bd-177e-41b4-8693-06a14300bb22-kube-api-access-6v9tn\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.927413 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-utilities\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.927796 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-catalog-content\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.927850 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-utilities\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:34 crc kubenswrapper[4856]: I1122 08:53:34.948199 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6v9tn\" (UniqueName: \"kubernetes.io/projected/b744c7bd-177e-41b4-8693-06a14300bb22-kube-api-access-6v9tn\") pod \"redhat-operators-twtgv\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:35 crc kubenswrapper[4856]: I1122 08:53:35.067548 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:35 crc kubenswrapper[4856]: I1122 08:53:35.412869 4856 generic.go:334] "Generic (PLEG): container finished" podID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerID="1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff" exitCode=0 Nov 22 08:53:35 crc kubenswrapper[4856]: I1122 08:53:35.412949 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clhx2" event={"ID":"691672a0-1d7e-4136-946c-4d965e3b88b8","Type":"ContainerDied","Data":"1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff"} Nov 22 08:53:35 crc kubenswrapper[4856]: I1122 08:53:35.538374 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-twtgv"] Nov 22 08:53:35 crc kubenswrapper[4856]: W1122 08:53:35.551878 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb744c7bd_177e_41b4_8693_06a14300bb22.slice/crio-3a73f340513377fd52c64c07b0023713e33064699522bd0716e7136edeb4727f WatchSource:0}: Error finding container 3a73f340513377fd52c64c07b0023713e33064699522bd0716e7136edeb4727f: Status 404 returned error can't find the container with id 3a73f340513377fd52c64c07b0023713e33064699522bd0716e7136edeb4727f Nov 22 08:53:36 crc kubenswrapper[4856]: I1122 08:53:36.424954 4856 generic.go:334] "Generic (PLEG): container finished" podID="b744c7bd-177e-41b4-8693-06a14300bb22" containerID="367a4f2cc970ea1bc94557c04ea50dde97ae825b958c6a5063ef6d954a94f7e5" exitCode=0 Nov 22 08:53:36 crc kubenswrapper[4856]: I1122 08:53:36.425061 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twtgv" event={"ID":"b744c7bd-177e-41b4-8693-06a14300bb22","Type":"ContainerDied","Data":"367a4f2cc970ea1bc94557c04ea50dde97ae825b958c6a5063ef6d954a94f7e5"} Nov 22 08:53:36 crc kubenswrapper[4856]: I1122 08:53:36.425375 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twtgv" event={"ID":"b744c7bd-177e-41b4-8693-06a14300bb22","Type":"ContainerStarted","Data":"3a73f340513377fd52c64c07b0023713e33064699522bd0716e7136edeb4727f"} Nov 22 08:53:36 crc kubenswrapper[4856]: I1122 08:53:36.429181 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clhx2" event={"ID":"691672a0-1d7e-4136-946c-4d965e3b88b8","Type":"ContainerStarted","Data":"6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722"} Nov 22 08:53:36 crc kubenswrapper[4856]: I1122 08:53:36.459934 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-clhx2" podStartSLOduration=2.044762361 podStartE2EDuration="4.45991486s" podCreationTimestamp="2025-11-22 08:53:32 +0000 UTC" firstStartedPulling="2025-11-22 08:53:33.390063116 +0000 UTC m=+6655.803456374" lastFinishedPulling="2025-11-22 08:53:35.805215615 +0000 UTC m=+6658.218608873" observedRunningTime="2025-11-22 08:53:36.458194944 +0000 UTC m=+6658.871588222" watchObservedRunningTime="2025-11-22 08:53:36.45991486 
+0000 UTC m=+6658.873308118" Nov 22 08:53:37 crc kubenswrapper[4856]: I1122 08:53:37.441671 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twtgv" event={"ID":"b744c7bd-177e-41b4-8693-06a14300bb22","Type":"ContainerStarted","Data":"613762f855c7b3757dadd233e3bfe21b3b6b5e97f0a1bf6cd3ea666e1b139b5a"} Nov 22 08:53:39 crc kubenswrapper[4856]: I1122 08:53:39.156145 4856 scope.go:117] "RemoveContainer" containerID="5a7a126b36eddf6debb9120e04d42173a79b2102b757179ef35d1eab58bbdb2a" Nov 22 08:53:39 crc kubenswrapper[4856]: I1122 08:53:39.182283 4856 scope.go:117] "RemoveContainer" containerID="15e0aaa1e96ae564811931d1e8608e46203e4e5379a0f50605507df045701c2b" Nov 22 08:53:39 crc kubenswrapper[4856]: I1122 08:53:39.234848 4856 scope.go:117] "RemoveContainer" containerID="8907f44112bc37b7415ca0cd25d1319ed3f57d93a6ae48f72004cfdf1f6d8b73" Nov 22 08:53:39 crc kubenswrapper[4856]: I1122 08:53:39.301935 4856 scope.go:117] "RemoveContainer" containerID="6d7485cf8959d8d16fa37e46d28d9362ae449ea080df9a35b78475fe456e4e5a" Nov 22 08:53:39 crc kubenswrapper[4856]: I1122 08:53:39.354999 4856 scope.go:117] "RemoveContainer" containerID="5791ae700212e39b9b30454ad838493176357c551bc6ba3c5ed35232de81d9a3" Nov 22 08:53:39 crc kubenswrapper[4856]: I1122 08:53:39.419078 4856 scope.go:117] "RemoveContainer" containerID="d6746db713e2a7719f2bb82079706c80a7c92bb7919249692dc1e608cf514e78" Nov 22 08:53:39 crc kubenswrapper[4856]: I1122 08:53:39.479901 4856 scope.go:117] "RemoveContainer" containerID="31d8c15268959b5a7a6965b9e26b6f903b5d5ede686115db1f6458d04ffeebaa" Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.044438 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7dt8"] Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.055907 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5bk7q"] Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.064069 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5bk7q"] Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.073958 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-k7dt8"] Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.671387 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.671941 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.741078 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1a60af-c38b-436e-99aa-e3140fb55829" path="/var/lib/kubelet/pods/0e1a60af-c38b-436e-99aa-e3140fb55829/volumes" Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.742130 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6b23a7e-3095-43b9-846f-48d7a5b9b628" path="/var/lib/kubelet/pods/e6b23a7e-3095-43b9-846f-48d7a5b9b628/volumes" Nov 22 08:53:42 crc kubenswrapper[4856]: I1122 08:53:42.749342 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:43 crc kubenswrapper[4856]: I1122 08:53:43.518875 4856 generic.go:334] "Generic (PLEG): container finished" podID="b744c7bd-177e-41b4-8693-06a14300bb22" 
containerID="613762f855c7b3757dadd233e3bfe21b3b6b5e97f0a1bf6cd3ea666e1b139b5a" exitCode=0 Nov 22 08:53:43 crc kubenswrapper[4856]: I1122 08:53:43.518946 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twtgv" event={"ID":"b744c7bd-177e-41b4-8693-06a14300bb22","Type":"ContainerDied","Data":"613762f855c7b3757dadd233e3bfe21b3b6b5e97f0a1bf6cd3ea666e1b139b5a"} Nov 22 08:53:43 crc kubenswrapper[4856]: I1122 08:53:43.571047 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:43 crc kubenswrapper[4856]: I1122 08:53:43.985917 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-clhx2"] Nov 22 08:53:44 crc kubenswrapper[4856]: I1122 08:53:44.532215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twtgv" event={"ID":"b744c7bd-177e-41b4-8693-06a14300bb22","Type":"ContainerStarted","Data":"0f01e73e9ebce021bc73e8307068cd5e01a0c72104cb43d449ff870e3955d4de"} Nov 22 08:53:44 crc kubenswrapper[4856]: I1122 08:53:44.553904 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-twtgv" podStartSLOduration=2.778000784 podStartE2EDuration="10.553882415s" podCreationTimestamp="2025-11-22 08:53:34 +0000 UTC" firstStartedPulling="2025-11-22 08:53:36.426936612 +0000 UTC m=+6658.840329870" lastFinishedPulling="2025-11-22 08:53:44.202818243 +0000 UTC m=+6666.616211501" observedRunningTime="2025-11-22 08:53:44.547894084 +0000 UTC m=+6666.961287342" watchObservedRunningTime="2025-11-22 08:53:44.553882415 +0000 UTC m=+6666.967275683" Nov 22 08:53:45 crc kubenswrapper[4856]: I1122 08:53:45.068211 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:45 crc kubenswrapper[4856]: I1122 08:53:45.068558 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:45 crc kubenswrapper[4856]: I1122 08:53:45.541061 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-clhx2" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="registry-server" containerID="cri-o://6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722" gracePeriod=2 Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.016671 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.069205 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-utilities\") pod \"691672a0-1d7e-4136-946c-4d965e3b88b8\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.069269 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf6b5\" (UniqueName: \"kubernetes.io/projected/691672a0-1d7e-4136-946c-4d965e3b88b8-kube-api-access-vf6b5\") pod \"691672a0-1d7e-4136-946c-4d965e3b88b8\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.069306 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-catalog-content\") pod \"691672a0-1d7e-4136-946c-4d965e3b88b8\" (UID: \"691672a0-1d7e-4136-946c-4d965e3b88b8\") " Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.070127 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-utilities" (OuterVolumeSpecName: "utilities") pod "691672a0-1d7e-4136-946c-4d965e3b88b8" (UID: "691672a0-1d7e-4136-946c-4d965e3b88b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.077958 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/691672a0-1d7e-4136-946c-4d965e3b88b8-kube-api-access-vf6b5" (OuterVolumeSpecName: "kube-api-access-vf6b5") pod "691672a0-1d7e-4136-946c-4d965e3b88b8" (UID: "691672a0-1d7e-4136-946c-4d965e3b88b8"). InnerVolumeSpecName "kube-api-access-vf6b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.084953 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "691672a0-1d7e-4136-946c-4d965e3b88b8" (UID: "691672a0-1d7e-4136-946c-4d965e3b88b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.117832 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-twtgv" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="registry-server" probeResult="failure" output=< Nov 22 08:53:46 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 08:53:46 crc kubenswrapper[4856]: > Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.171708 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.171754 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf6b5\" (UniqueName: \"kubernetes.io/projected/691672a0-1d7e-4136-946c-4d965e3b88b8-kube-api-access-vf6b5\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.171769 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691672a0-1d7e-4136-946c-4d965e3b88b8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.552044 4856 generic.go:334] "Generic (PLEG): container finished" podID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerID="6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722" exitCode=0 Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.552088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clhx2" event={"ID":"691672a0-1d7e-4136-946c-4d965e3b88b8","Type":"ContainerDied","Data":"6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722"} Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.552142 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-clhx2" event={"ID":"691672a0-1d7e-4136-946c-4d965e3b88b8","Type":"ContainerDied","Data":"1b76e4a67234115c8f78aa247dc8092c657fd98a657b801bee07aed9dfb0a234"} Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.552171 4856 scope.go:117] "RemoveContainer" containerID="6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.552105 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-clhx2" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.578945 4856 scope.go:117] "RemoveContainer" containerID="1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.590237 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-clhx2"] Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.598618 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-clhx2"] Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.613208 4856 scope.go:117] "RemoveContainer" containerID="74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.647924 4856 scope.go:117] "RemoveContainer" containerID="6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722" Nov 22 08:53:46 crc kubenswrapper[4856]: E1122 08:53:46.648426 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722\": container with ID starting with 6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722 not found: ID does not exist" containerID="6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.648476 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722"} err="failed to get container status \"6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722\": rpc error: code = NotFound desc = could not find container \"6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722\": container with ID starting with 6ba81db23a13720990b605c010985fd7af99cd42fcfbbaab4da0c4987fdd2722 not found: ID does not exist" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.648528 4856 scope.go:117] "RemoveContainer" containerID="1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff" Nov 22 08:53:46 crc kubenswrapper[4856]: E1122 08:53:46.648971 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff\": container with ID starting with 1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff not found: ID does not exist" containerID="1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.648994 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff"} err="failed to get container status \"1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff\": rpc error: code = NotFound desc = could not find container \"1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff\": container with ID starting with 1acacc47c24fe0420a57da7b7cba1c68fa9a93c4738cbef2497f49c062d133ff not found: ID does not exist" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.649011 4856 scope.go:117] "RemoveContainer" containerID="74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693" Nov 22 08:53:46 crc kubenswrapper[4856]: E1122 08:53:46.649384 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693\": container with ID starting with 74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693 not found: ID does not exist" containerID="74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.649408 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693"} err="failed to get container status \"74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693\": rpc error: code = NotFound desc = could not find container \"74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693\": container with ID starting with 74f03fda9ac87237aed230a66c5cd7d4958e4814f5bb97dfaf00cdca6ae44693 not found: ID does not exist" Nov 22 08:53:46 crc kubenswrapper[4856]: I1122 08:53:46.731715 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" path="/var/lib/kubelet/pods/691672a0-1d7e-4136-946c-4d965e3b88b8/volumes" Nov 22 08:53:55 crc kubenswrapper[4856]: I1122 08:53:55.115138 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:55 crc kubenswrapper[4856]: I1122 08:53:55.168774 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:55 crc kubenswrapper[4856]: I1122 08:53:55.352850 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twtgv"] Nov 22 08:53:56 crc kubenswrapper[4856]: I1122 08:53:56.647058 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-twtgv" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="registry-server" containerID="cri-o://0f01e73e9ebce021bc73e8307068cd5e01a0c72104cb43d449ff870e3955d4de" gracePeriod=2 Nov 22 08:53:57 crc kubenswrapper[4856]: I1122 08:53:57.657191 4856 generic.go:334] "Generic (PLEG): container finished" podID="b744c7bd-177e-41b4-8693-06a14300bb22" containerID="0f01e73e9ebce021bc73e8307068cd5e01a0c72104cb43d449ff870e3955d4de" exitCode=0 Nov 22 08:53:57 crc kubenswrapper[4856]: I1122 08:53:57.657256 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twtgv" event={"ID":"b744c7bd-177e-41b4-8693-06a14300bb22","Type":"ContainerDied","Data":"0f01e73e9ebce021bc73e8307068cd5e01a0c72104cb43d449ff870e3955d4de"} Nov 22 08:53:57 crc kubenswrapper[4856]: I1122 08:53:57.902829 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.015624 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-catalog-content\") pod \"b744c7bd-177e-41b4-8693-06a14300bb22\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.015708 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-utilities\") pod \"b744c7bd-177e-41b4-8693-06a14300bb22\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.015880 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v9tn\" (UniqueName: \"kubernetes.io/projected/b744c7bd-177e-41b4-8693-06a14300bb22-kube-api-access-6v9tn\") pod \"b744c7bd-177e-41b4-8693-06a14300bb22\" (UID: \"b744c7bd-177e-41b4-8693-06a14300bb22\") " Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.017016 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-utilities" (OuterVolumeSpecName: "utilities") pod "b744c7bd-177e-41b4-8693-06a14300bb22" (UID: "b744c7bd-177e-41b4-8693-06a14300bb22"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.022316 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b744c7bd-177e-41b4-8693-06a14300bb22-kube-api-access-6v9tn" (OuterVolumeSpecName: "kube-api-access-6v9tn") pod "b744c7bd-177e-41b4-8693-06a14300bb22" (UID: "b744c7bd-177e-41b4-8693-06a14300bb22"). InnerVolumeSpecName "kube-api-access-6v9tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.118790 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.118825 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v9tn\" (UniqueName: \"kubernetes.io/projected/b744c7bd-177e-41b4-8693-06a14300bb22-kube-api-access-6v9tn\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.120567 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b744c7bd-177e-41b4-8693-06a14300bb22" (UID: "b744c7bd-177e-41b4-8693-06a14300bb22"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.221265 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b744c7bd-177e-41b4-8693-06a14300bb22-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.669803 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-twtgv" event={"ID":"b744c7bd-177e-41b4-8693-06a14300bb22","Type":"ContainerDied","Data":"3a73f340513377fd52c64c07b0023713e33064699522bd0716e7136edeb4727f"} Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.669848 4856 scope.go:117] "RemoveContainer" containerID="0f01e73e9ebce021bc73e8307068cd5e01a0c72104cb43d449ff870e3955d4de" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.669861 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-twtgv" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.698636 4856 scope.go:117] "RemoveContainer" containerID="613762f855c7b3757dadd233e3bfe21b3b6b5e97f0a1bf6cd3ea666e1b139b5a" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.703573 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-twtgv"] Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.730911 4856 scope.go:117] "RemoveContainer" containerID="367a4f2cc970ea1bc94557c04ea50dde97ae825b958c6a5063ef6d954a94f7e5" Nov 22 08:53:58 crc kubenswrapper[4856]: I1122 08:53:58.738344 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-twtgv"] Nov 22 08:53:59 crc kubenswrapper[4856]: I1122 08:53:59.754675 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:53:59 crc kubenswrapper[4856]: I1122 08:53:59.755033 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:54:00 crc kubenswrapper[4856]: I1122 08:54:00.722466 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" path="/var/lib/kubelet/pods/b744c7bd-177e-41b4-8693-06a14300bb22/volumes" Nov 22 08:54:29 crc kubenswrapper[4856]: I1122 08:54:29.754795 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 08:54:29 crc kubenswrapper[4856]: I1122 08:54:29.755438 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 08:54:29 crc kubenswrapper[4856]: I1122 08:54:29.755640 4856 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 08:54:29 crc kubenswrapper[4856]: I1122 08:54:29.756494 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 08:54:29 crc kubenswrapper[4856]: I1122 08:54:29.756632 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" gracePeriod=600 Nov 22 08:54:30 crc kubenswrapper[4856]: E1122 08:54:30.485924 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:54:30 crc kubenswrapper[4856]: I1122 08:54:30.971385 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" exitCode=0 Nov 22 08:54:30 crc kubenswrapper[4856]: I1122 08:54:30.971553 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e"} Nov 22 08:54:30 crc kubenswrapper[4856]: I1122 08:54:30.971756 4856 scope.go:117] "RemoveContainer" containerID="2e4db6dfa0f8e0b89e30204c184a440910ad4ebbbe2c1f37db91bf8c459e660c" Nov 22 08:54:30 crc kubenswrapper[4856]: I1122 08:54:30.972493 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:54:30 crc kubenswrapper[4856]: E1122 08:54:30.972798 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:54:39 crc kubenswrapper[4856]: I1122 08:54:39.705362 4856 scope.go:117] "RemoveContainer" containerID="f97c7bafcf30231b92f955dcf99fccbff8f3409dad368d1c1dad01eb82dbf7b5" Nov 22 08:54:39 crc kubenswrapper[4856]: I1122 08:54:39.749081 4856 scope.go:117] "RemoveContainer" containerID="62b2c3be1a42f5cf20f78093fcb1e391742316e277d244bcbf3be0ac71523056" Nov 22 08:54:41 crc kubenswrapper[4856]: I1122 08:54:41.710234 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:54:41 crc kubenswrapper[4856]: E1122 08:54:41.710794 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:54:42 crc kubenswrapper[4856]: I1122 08:54:42.047028 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bb2b6"] Nov 22 08:54:42 crc kubenswrapper[4856]: I1122 08:54:42.055053 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bb2b6"] Nov 22 08:54:42 crc kubenswrapper[4856]: I1122 08:54:42.722276 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="482880bb-c065-4bed-be16-bad626eac7ed" path="/var/lib/kubelet/pods/482880bb-c065-4bed-be16-bad626eac7ed/volumes" Nov 22 08:54:52 crc kubenswrapper[4856]: I1122 08:54:52.710261 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:54:52 crc kubenswrapper[4856]: E1122 08:54:52.713118 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:55:04 crc kubenswrapper[4856]: I1122 08:55:04.709564 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:55:04 crc kubenswrapper[4856]: E1122 08:55:04.710384 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:55:15 crc kubenswrapper[4856]: I1122 08:55:15.710848 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:55:15 crc kubenswrapper[4856]: E1122 08:55:15.711658 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:55:30 crc kubenswrapper[4856]: I1122 08:55:30.710924 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:55:30 crc kubenswrapper[4856]: E1122 08:55:30.711967 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:55:39 crc kubenswrapper[4856]: I1122 08:55:39.882201 4856 scope.go:117] "RemoveContainer" containerID="f2d7f4d17daf3a8a1de39e5b6d2335f98e57082c3c0ace101a71b483b1ecba36" Nov 22 08:55:41 crc kubenswrapper[4856]: I1122 08:55:41.710206 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:55:41 crc kubenswrapper[4856]: E1122 08:55:41.710774 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:55:52 crc kubenswrapper[4856]: I1122 08:55:52.712115 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:55:52 crc kubenswrapper[4856]: E1122 08:55:52.713353 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:56:06 crc kubenswrapper[4856]: I1122 08:56:06.711157 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:56:06 crc kubenswrapper[4856]: E1122 08:56:06.712161 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:56:18 crc kubenswrapper[4856]: I1122 08:56:18.724842 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:56:18 crc kubenswrapper[4856]: E1122 08:56:18.726316 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:56:29 crc kubenswrapper[4856]: I1122 08:56:29.710229 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:56:29 crc kubenswrapper[4856]: E1122 08:56:29.711213 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:56:40 crc kubenswrapper[4856]: I1122 08:56:40.710737 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:56:40 crc kubenswrapper[4856]: E1122 08:56:40.711460 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:56:55 crc kubenswrapper[4856]: I1122 08:56:55.709488 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:56:55 crc kubenswrapper[4856]: E1122 08:56:55.710366 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:57:09 crc kubenswrapper[4856]: I1122 08:57:09.710310 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:57:09 crc kubenswrapper[4856]: E1122 08:57:09.711055 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:57:20 crc kubenswrapper[4856]: I1122 08:57:20.710117 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:57:20 crc kubenswrapper[4856]: E1122 08:57:20.710864 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:57:21 crc kubenswrapper[4856]: I1122 08:57:21.038947 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-bd20-account-create-ncmhw"] Nov 22 08:57:21 crc kubenswrapper[4856]: I1122 08:57:21.046690 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-s8hnn"] Nov 22 08:57:21 crc kubenswrapper[4856]: I1122 08:57:21.054674 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-bd20-account-create-ncmhw"] Nov 22 08:57:21 crc kubenswrapper[4856]: I1122 08:57:21.061852 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-s8hnn"] Nov 22 08:57:22 crc kubenswrapper[4856]: I1122 08:57:22.723453 4856 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="211ac788-84c0-47d0-a08f-574892036281" path="/var/lib/kubelet/pods/211ac788-84c0-47d0-a08f-574892036281/volumes" Nov 22 08:57:22 crc kubenswrapper[4856]: I1122 08:57:22.724504 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b13002-f7ec-482b-81b3-dc6297a4ebc9" path="/var/lib/kubelet/pods/83b13002-f7ec-482b-81b3-dc6297a4ebc9/volumes" Nov 22 08:57:35 crc kubenswrapper[4856]: I1122 08:57:35.710262 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:57:35 crc kubenswrapper[4856]: E1122 08:57:35.711041 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:57:39 crc kubenswrapper[4856]: I1122 08:57:39.988792 4856 scope.go:117] "RemoveContainer" containerID="a765d7e8805ea40f32ceccbd95c3df40bdcf9b38525f446649a4091937e0e994" Nov 22 08:57:40 crc kubenswrapper[4856]: I1122 08:57:40.022328 4856 scope.go:117] "RemoveContainer" containerID="4b00f624b93df7f63e47150f91f0621140bb717d4499c2d780aea46428ac582b" Nov 22 08:57:47 crc kubenswrapper[4856]: I1122 08:57:47.709822 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:57:47 crc kubenswrapper[4856]: E1122 08:57:47.710442 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:57:58 crc kubenswrapper[4856]: I1122 08:57:58.717245 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:57:58 crc kubenswrapper[4856]: E1122 08:57:58.718142 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:58:01 crc kubenswrapper[4856]: I1122 08:58:01.039400 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-4prz8"] Nov 22 08:58:01 crc kubenswrapper[4856]: I1122 08:58:01.047493 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-4prz8"] Nov 22 08:58:02 crc kubenswrapper[4856]: I1122 08:58:02.721787 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b079f0f9-7b51-4800-b13f-f8d23132560f" path="/var/lib/kubelet/pods/b079f0f9-7b51-4800-b13f-f8d23132560f/volumes" Nov 22 08:58:12 crc kubenswrapper[4856]: I1122 08:58:12.709765 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:58:12 crc kubenswrapper[4856]: E1122 
08:58:12.711311 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:58:25 crc kubenswrapper[4856]: I1122 08:58:25.710068 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:58:25 crc kubenswrapper[4856]: E1122 08:58:25.711105 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:58:36 crc kubenswrapper[4856]: I1122 08:58:36.710407 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:58:36 crc kubenswrapper[4856]: E1122 08:58:36.711729 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:58:40 crc kubenswrapper[4856]: I1122 08:58:40.227190 4856 scope.go:117] "RemoveContainer" containerID="e88d889e242c35376c936f9d02c0a603110a25b482418a75d4ea482a7124d628" Nov 22 08:58:47 crc kubenswrapper[4856]: I1122 08:58:47.709990 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:58:47 crc kubenswrapper[4856]: E1122 08:58:47.710708 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:59:02 crc kubenswrapper[4856]: I1122 08:59:02.717175 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:59:02 crc kubenswrapper[4856]: E1122 08:59:02.718424 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.865138 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vk2bd"] Nov 22 08:59:04 crc kubenswrapper[4856]: E1122 08:59:04.865911 4856 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="extract-content" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.866449 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="extract-content" Nov 22 08:59:04 crc kubenswrapper[4856]: E1122 08:59:04.866496 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="extract-content" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.866642 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="extract-content" Nov 22 08:59:04 crc kubenswrapper[4856]: E1122 08:59:04.866684 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="registry-server" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.866696 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="registry-server" Nov 22 08:59:04 crc kubenswrapper[4856]: E1122 08:59:04.866725 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="extract-utilities" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.866734 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="extract-utilities" Nov 22 08:59:04 crc kubenswrapper[4856]: E1122 08:59:04.866762 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="extract-utilities" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.866770 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="extract-utilities" Nov 22 08:59:04 crc kubenswrapper[4856]: E1122 08:59:04.866784 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="registry-server" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.866793 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="registry-server" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.867069 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="691672a0-1d7e-4136-946c-4d965e3b88b8" containerName="registry-server" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.867099 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="b744c7bd-177e-41b4-8693-06a14300bb22" containerName="registry-server" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.869342 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:04 crc kubenswrapper[4856]: I1122 08:59:04.878903 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vk2bd"] Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.015327 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-utilities\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.015426 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmrz\" (UniqueName: \"kubernetes.io/projected/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-kube-api-access-gpmrz\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.015555 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-catalog-content\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.116918 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmrz\" (UniqueName: \"kubernetes.io/projected/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-kube-api-access-gpmrz\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.117063 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-catalog-content\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.117176 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-utilities\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.117726 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-utilities\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.118119 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-catalog-content\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.146290 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gpmrz\" (UniqueName: \"kubernetes.io/projected/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-kube-api-access-gpmrz\") pod \"community-operators-vk2bd\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.204052 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:05 crc kubenswrapper[4856]: I1122 08:59:05.769408 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vk2bd"] Nov 22 08:59:06 crc kubenswrapper[4856]: I1122 08:59:06.479721 4856 generic.go:334] "Generic (PLEG): container finished" podID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerID="4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd" exitCode=0 Nov 22 08:59:06 crc kubenswrapper[4856]: I1122 08:59:06.479787 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vk2bd" event={"ID":"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7","Type":"ContainerDied","Data":"4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd"} Nov 22 08:59:06 crc kubenswrapper[4856]: I1122 08:59:06.480037 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vk2bd" event={"ID":"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7","Type":"ContainerStarted","Data":"b54d44661dbe889df4b690101aab495e951b4812ec661d1d6f67681070be716c"} Nov 22 08:59:06 crc kubenswrapper[4856]: I1122 08:59:06.482251 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 08:59:08 crc kubenswrapper[4856]: I1122 08:59:08.499707 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vk2bd" event={"ID":"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7","Type":"ContainerStarted","Data":"06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4"} Nov 22 08:59:15 crc kubenswrapper[4856]: I1122 08:59:15.709906 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:59:15 crc kubenswrapper[4856]: E1122 08:59:15.711044 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:59:16 crc kubenswrapper[4856]: I1122 08:59:16.569593 4856 generic.go:334] "Generic (PLEG): container finished" podID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerID="06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4" exitCode=0 Nov 22 08:59:16 crc kubenswrapper[4856]: I1122 08:59:16.569644 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vk2bd" event={"ID":"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7","Type":"ContainerDied","Data":"06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4"} Nov 22 08:59:18 crc kubenswrapper[4856]: I1122 08:59:18.586410 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vk2bd" 
event={"ID":"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7","Type":"ContainerStarted","Data":"c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67"} Nov 22 08:59:18 crc kubenswrapper[4856]: I1122 08:59:18.605744 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vk2bd" podStartSLOduration=3.682231291 podStartE2EDuration="14.605723133s" podCreationTimestamp="2025-11-22 08:59:04 +0000 UTC" firstStartedPulling="2025-11-22 08:59:06.48196717 +0000 UTC m=+6988.895360428" lastFinishedPulling="2025-11-22 08:59:17.405459002 +0000 UTC m=+6999.818852270" observedRunningTime="2025-11-22 08:59:18.602895447 +0000 UTC m=+7001.016288715" watchObservedRunningTime="2025-11-22 08:59:18.605723133 +0000 UTC m=+7001.019116391" Nov 22 08:59:25 crc kubenswrapper[4856]: I1122 08:59:25.204677 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:25 crc kubenswrapper[4856]: I1122 08:59:25.205342 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:25 crc kubenswrapper[4856]: I1122 08:59:25.249787 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:25 crc kubenswrapper[4856]: I1122 08:59:25.690301 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:25 crc kubenswrapper[4856]: I1122 08:59:25.732991 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vk2bd"] Nov 22 08:59:26 crc kubenswrapper[4856]: I1122 08:59:26.710407 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:59:26 crc kubenswrapper[4856]: E1122 08:59:26.711076 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 08:59:27 crc kubenswrapper[4856]: I1122 08:59:27.665208 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vk2bd" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="registry-server" containerID="cri-o://c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67" gracePeriod=2 Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.656726 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.694564 4856 generic.go:334] "Generic (PLEG): container finished" podID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerID="c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67" exitCode=0 Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.694618 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vk2bd" event={"ID":"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7","Type":"ContainerDied","Data":"c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67"} Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.694632 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vk2bd" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.694649 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vk2bd" event={"ID":"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7","Type":"ContainerDied","Data":"b54d44661dbe889df4b690101aab495e951b4812ec661d1d6f67681070be716c"} Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.694670 4856 scope.go:117] "RemoveContainer" containerID="c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.717108 4856 scope.go:117] "RemoveContainer" containerID="06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.743868 4856 scope.go:117] "RemoveContainer" containerID="4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.769606 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-utilities\") pod \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.769771 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpmrz\" (UniqueName: \"kubernetes.io/projected/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-kube-api-access-gpmrz\") pod \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.770069 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-catalog-content\") pod \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\" (UID: \"ebb117bc-cb08-47ab-b63d-2f3cf39bdec7\") " Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.772279 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-utilities" (OuterVolumeSpecName: "utilities") pod "ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" (UID: "ebb117bc-cb08-47ab-b63d-2f3cf39bdec7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.776679 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-kube-api-access-gpmrz" (OuterVolumeSpecName: "kube-api-access-gpmrz") pod "ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" (UID: "ebb117bc-cb08-47ab-b63d-2f3cf39bdec7"). InnerVolumeSpecName "kube-api-access-gpmrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.796905 4856 scope.go:117] "RemoveContainer" containerID="c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67" Nov 22 08:59:28 crc kubenswrapper[4856]: E1122 08:59:28.797407 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67\": container with ID starting with c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67 not found: ID does not exist" containerID="c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.797526 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67"} err="failed to get container status \"c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67\": rpc error: code = NotFound desc = could not find container \"c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67\": container with ID starting with c6cc9d3741d27c7e272e3533cc60e700090127baf51159bbecfc85d886060a67 not found: ID does not exist" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.797617 4856 scope.go:117] "RemoveContainer" containerID="06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4" Nov 22 08:59:28 crc kubenswrapper[4856]: E1122 08:59:28.798033 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4\": container with ID starting with 06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4 not found: ID does not exist" containerID="06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.798070 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4"} err="failed to get container status \"06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4\": rpc error: code = NotFound desc = could not find container \"06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4\": container with ID starting with 06cf0b36118c3fa84296ec42001506fa9a83c5d6e0d964b8194e34e304b801a4 not found: ID does not exist" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.798097 4856 scope.go:117] "RemoveContainer" containerID="4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd" Nov 22 08:59:28 crc kubenswrapper[4856]: E1122 08:59:28.798653 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd\": container with ID starting with 4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd not found: ID does not 
exist" containerID="4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.798758 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd"} err="failed to get container status \"4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd\": rpc error: code = NotFound desc = could not find container \"4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd\": container with ID starting with 4e1907b6a0843ad1ac8a4a49d15cba35587eb859eaf055b1dba7e88798d05bbd not found: ID does not exist" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.825431 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" (UID: "ebb117bc-cb08-47ab-b63d-2f3cf39bdec7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.873219 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.873268 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpmrz\" (UniqueName: \"kubernetes.io/projected/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-kube-api-access-gpmrz\") on node \"crc\" DevicePath \"\"" Nov 22 08:59:28 crc kubenswrapper[4856]: I1122 08:59:28.873280 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 08:59:29 crc kubenswrapper[4856]: I1122 08:59:29.033723 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vk2bd"] Nov 22 08:59:29 crc kubenswrapper[4856]: I1122 08:59:29.045194 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vk2bd"] Nov 22 08:59:30 crc kubenswrapper[4856]: I1122 08:59:30.721882 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" path="/var/lib/kubelet/pods/ebb117bc-cb08-47ab-b63d-2f3cf39bdec7/volumes" Nov 22 08:59:41 crc kubenswrapper[4856]: I1122 08:59:41.710383 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 08:59:42 crc kubenswrapper[4856]: I1122 08:59:42.821155 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"8bedeef61d55017516ee06c2d6c95ca3849842a59141d0c9e8b0d0befed04499"} Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.166770 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6"] Nov 22 09:00:00 crc kubenswrapper[4856]: E1122 09:00:00.168795 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.168879 4856 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4856]: E1122 09:00:00.168957 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="extract-utilities" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.169017 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="extract-utilities" Nov 22 09:00:00 crc kubenswrapper[4856]: E1122 09:00:00.169093 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.169151 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="extract-content" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.169407 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb117bc-cb08-47ab-b63d-2f3cf39bdec7" containerName="registry-server" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.170220 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.172869 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.173304 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.180428 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6"] Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.252834 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1b0306c-9e7a-4831-8e59-e0e743c35064-secret-volume\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.253232 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdlqt\" (UniqueName: \"kubernetes.io/projected/e1b0306c-9e7a-4831-8e59-e0e743c35064-kube-api-access-hdlqt\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.253554 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1b0306c-9e7a-4831-8e59-e0e743c35064-config-volume\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.355552 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1b0306c-9e7a-4831-8e59-e0e743c35064-config-volume\") pod \"collect-profiles-29396700-zxxf6\" (UID: 
\"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.355877 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1b0306c-9e7a-4831-8e59-e0e743c35064-secret-volume\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.356017 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdlqt\" (UniqueName: \"kubernetes.io/projected/e1b0306c-9e7a-4831-8e59-e0e743c35064-kube-api-access-hdlqt\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.356730 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1b0306c-9e7a-4831-8e59-e0e743c35064-config-volume\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.363175 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1b0306c-9e7a-4831-8e59-e0e743c35064-secret-volume\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.375780 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdlqt\" (UniqueName: \"kubernetes.io/projected/e1b0306c-9e7a-4831-8e59-e0e743c35064-kube-api-access-hdlqt\") pod \"collect-profiles-29396700-zxxf6\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.499069 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:00 crc kubenswrapper[4856]: I1122 09:00:00.978716 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6"] Nov 22 09:00:01 crc kubenswrapper[4856]: I1122 09:00:01.005287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" event={"ID":"e1b0306c-9e7a-4831-8e59-e0e743c35064","Type":"ContainerStarted","Data":"d7e5a996feca42e40177f8b3165353f0743919aa5e3d16043ffcffbf25713b64"} Nov 22 09:00:02 crc kubenswrapper[4856]: I1122 09:00:02.022282 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" event={"ID":"e1b0306c-9e7a-4831-8e59-e0e743c35064","Type":"ContainerStarted","Data":"85326c197aab7ac8d5e6b131d871ec6b2782ce2404f52819e68e114711ec7f2b"} Nov 22 09:00:02 crc kubenswrapper[4856]: I1122 09:00:02.038451 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" podStartSLOduration=2.03843442 podStartE2EDuration="2.03843442s" podCreationTimestamp="2025-11-22 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:00:02.038106102 +0000 UTC m=+7044.451499360" watchObservedRunningTime="2025-11-22 09:00:02.03843442 +0000 UTC m=+7044.451827678" Nov 22 09:00:03 crc kubenswrapper[4856]: I1122 09:00:03.032759 4856 generic.go:334] "Generic (PLEG): container finished" podID="e1b0306c-9e7a-4831-8e59-e0e743c35064" containerID="85326c197aab7ac8d5e6b131d871ec6b2782ce2404f52819e68e114711ec7f2b" exitCode=0 Nov 22 09:00:03 crc kubenswrapper[4856]: I1122 09:00:03.032816 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" event={"ID":"e1b0306c-9e7a-4831-8e59-e0e743c35064","Type":"ContainerDied","Data":"85326c197aab7ac8d5e6b131d871ec6b2782ce2404f52819e68e114711ec7f2b"} Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.362449 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.546343 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1b0306c-9e7a-4831-8e59-e0e743c35064-secret-volume\") pod \"e1b0306c-9e7a-4831-8e59-e0e743c35064\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.546689 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1b0306c-9e7a-4831-8e59-e0e743c35064-config-volume\") pod \"e1b0306c-9e7a-4831-8e59-e0e743c35064\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.546812 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdlqt\" (UniqueName: \"kubernetes.io/projected/e1b0306c-9e7a-4831-8e59-e0e743c35064-kube-api-access-hdlqt\") pod \"e1b0306c-9e7a-4831-8e59-e0e743c35064\" (UID: \"e1b0306c-9e7a-4831-8e59-e0e743c35064\") " Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.547666 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1b0306c-9e7a-4831-8e59-e0e743c35064-config-volume" (OuterVolumeSpecName: "config-volume") pod "e1b0306c-9e7a-4831-8e59-e0e743c35064" (UID: "e1b0306c-9e7a-4831-8e59-e0e743c35064"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.552110 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1b0306c-9e7a-4831-8e59-e0e743c35064-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e1b0306c-9e7a-4831-8e59-e0e743c35064" (UID: "e1b0306c-9e7a-4831-8e59-e0e743c35064"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.552497 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1b0306c-9e7a-4831-8e59-e0e743c35064-kube-api-access-hdlqt" (OuterVolumeSpecName: "kube-api-access-hdlqt") pod "e1b0306c-9e7a-4831-8e59-e0e743c35064" (UID: "e1b0306c-9e7a-4831-8e59-e0e743c35064"). InnerVolumeSpecName "kube-api-access-hdlqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.649355 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdlqt\" (UniqueName: \"kubernetes.io/projected/e1b0306c-9e7a-4831-8e59-e0e743c35064-kube-api-access-hdlqt\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.649406 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1b0306c-9e7a-4831-8e59-e0e743c35064-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:04 crc kubenswrapper[4856]: I1122 09:00:04.649425 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1b0306c-9e7a-4831-8e59-e0e743c35064-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:00:05 crc kubenswrapper[4856]: I1122 09:00:05.056500 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" event={"ID":"e1b0306c-9e7a-4831-8e59-e0e743c35064","Type":"ContainerDied","Data":"d7e5a996feca42e40177f8b3165353f0743919aa5e3d16043ffcffbf25713b64"} Nov 22 09:00:05 crc kubenswrapper[4856]: I1122 09:00:05.056576 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7e5a996feca42e40177f8b3165353f0743919aa5e3d16043ffcffbf25713b64" Nov 22 09:00:05 crc kubenswrapper[4856]: I1122 09:00:05.056643 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6" Nov 22 09:00:05 crc kubenswrapper[4856]: I1122 09:00:05.145295 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c"] Nov 22 09:00:05 crc kubenswrapper[4856]: I1122 09:00:05.152782 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396655-vht6c"] Nov 22 09:00:06 crc kubenswrapper[4856]: I1122 09:00:06.723489 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e" path="/var/lib/kubelet/pods/eee5a7d8-60c5-4d9d-a7cb-faea88bdd67e/volumes" Nov 22 09:00:47 crc kubenswrapper[4856]: I1122 09:00:47.881336 4856 scope.go:117] "RemoveContainer" containerID="6e1d2f3c38ac18a9201c0b70e01e02be31cd1e024837c3df8f58e84a84e3a3b5" Nov 22 09:00:56 crc kubenswrapper[4856]: I1122 09:00:56.047122 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-nsv56"] Nov 22 09:00:56 crc kubenswrapper[4856]: I1122 09:00:56.058282 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-bd84-account-create-pzl2m"] Nov 22 09:00:56 crc kubenswrapper[4856]: I1122 09:00:56.066765 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-nsv56"] Nov 22 09:00:56 crc kubenswrapper[4856]: I1122 09:00:56.075383 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-bd84-account-create-pzl2m"] Nov 22 09:00:56 crc kubenswrapper[4856]: I1122 09:00:56.720867 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2159a754-b822-4794-aee2-1e2d51ddca60" path="/var/lib/kubelet/pods/2159a754-b822-4794-aee2-1e2d51ddca60/volumes" Nov 22 09:00:56 crc kubenswrapper[4856]: I1122 09:00:56.721569 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86563ff1-f26e-4490-b9d3-ebe7456ee633" 
path="/var/lib/kubelet/pods/86563ff1-f26e-4490-b9d3-ebe7456ee633/volumes" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.152899 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29396701-rhfc7"] Nov 22 09:01:00 crc kubenswrapper[4856]: E1122 09:01:00.153760 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1b0306c-9e7a-4831-8e59-e0e743c35064" containerName="collect-profiles" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.153779 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1b0306c-9e7a-4831-8e59-e0e743c35064" containerName="collect-profiles" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.154080 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1b0306c-9e7a-4831-8e59-e0e743c35064" containerName="collect-profiles" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.155022 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.165074 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396701-rhfc7"] Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.337361 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9hq\" (UniqueName: \"kubernetes.io/projected/9860533e-121c-4025-b616-da777f3db9a3-kube-api-access-xh9hq\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.338103 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-combined-ca-bundle\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.338225 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-config-data\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.338381 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-fernet-keys\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.440998 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-fernet-keys\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.441157 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh9hq\" (UniqueName: \"kubernetes.io/projected/9860533e-121c-4025-b616-da777f3db9a3-kube-api-access-xh9hq\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") 
" pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.441219 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-combined-ca-bundle\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.441247 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-config-data\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.448010 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-fernet-keys\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.448269 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-combined-ca-bundle\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.449214 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-config-data\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.459897 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh9hq\" (UniqueName: \"kubernetes.io/projected/9860533e-121c-4025-b616-da777f3db9a3-kube-api-access-xh9hq\") pod \"keystone-cron-29396701-rhfc7\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.481195 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:00 crc kubenswrapper[4856]: I1122 09:01:00.939037 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396701-rhfc7"] Nov 22 09:01:01 crc kubenswrapper[4856]: I1122 09:01:01.564754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-rhfc7" event={"ID":"9860533e-121c-4025-b616-da777f3db9a3","Type":"ContainerStarted","Data":"2e4504644df361601b1f63a123eb4269e618f4ff5b2c57d7d9449d747903ac8b"} Nov 22 09:01:01 crc kubenswrapper[4856]: I1122 09:01:01.565314 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-rhfc7" event={"ID":"9860533e-121c-4025-b616-da777f3db9a3","Type":"ContainerStarted","Data":"ccfa655e1684a71816eceb24655b1a4b1346bbde4b896a9d3c48cf5955985ec6"} Nov 22 09:01:01 crc kubenswrapper[4856]: I1122 09:01:01.584810 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29396701-rhfc7" podStartSLOduration=1.584790114 podStartE2EDuration="1.584790114s" podCreationTimestamp="2025-11-22 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:01:01.578559595 +0000 UTC m=+7103.991952863" watchObservedRunningTime="2025-11-22 09:01:01.584790114 +0000 UTC m=+7103.998183362" Nov 22 09:01:05 crc kubenswrapper[4856]: I1122 09:01:05.601856 4856 generic.go:334] "Generic (PLEG): container finished" podID="9860533e-121c-4025-b616-da777f3db9a3" containerID="2e4504644df361601b1f63a123eb4269e618f4ff5b2c57d7d9449d747903ac8b" exitCode=0 Nov 22 09:01:05 crc kubenswrapper[4856]: I1122 09:01:05.601942 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-rhfc7" event={"ID":"9860533e-121c-4025-b616-da777f3db9a3","Type":"ContainerDied","Data":"2e4504644df361601b1f63a123eb4269e618f4ff5b2c57d7d9449d747903ac8b"} Nov 22 09:01:06 crc kubenswrapper[4856]: I1122 09:01:06.974921 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.088289 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-combined-ca-bundle\") pod \"9860533e-121c-4025-b616-da777f3db9a3\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.088822 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-fernet-keys\") pod \"9860533e-121c-4025-b616-da777f3db9a3\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.088940 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-config-data\") pod \"9860533e-121c-4025-b616-da777f3db9a3\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.089003 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh9hq\" (UniqueName: \"kubernetes.io/projected/9860533e-121c-4025-b616-da777f3db9a3-kube-api-access-xh9hq\") pod \"9860533e-121c-4025-b616-da777f3db9a3\" (UID: \"9860533e-121c-4025-b616-da777f3db9a3\") " Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.094199 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9860533e-121c-4025-b616-da777f3db9a3" (UID: "9860533e-121c-4025-b616-da777f3db9a3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.094225 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9860533e-121c-4025-b616-da777f3db9a3-kube-api-access-xh9hq" (OuterVolumeSpecName: "kube-api-access-xh9hq") pod "9860533e-121c-4025-b616-da777f3db9a3" (UID: "9860533e-121c-4025-b616-da777f3db9a3"). InnerVolumeSpecName "kube-api-access-xh9hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.123889 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9860533e-121c-4025-b616-da777f3db9a3" (UID: "9860533e-121c-4025-b616-da777f3db9a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.147351 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-config-data" (OuterVolumeSpecName: "config-data") pod "9860533e-121c-4025-b616-da777f3db9a3" (UID: "9860533e-121c-4025-b616-da777f3db9a3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.193098 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.193149 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.193204 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9860533e-121c-4025-b616-da777f3db9a3-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.193223 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh9hq\" (UniqueName: \"kubernetes.io/projected/9860533e-121c-4025-b616-da777f3db9a3-kube-api-access-xh9hq\") on node \"crc\" DevicePath \"\"" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.625409 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396701-rhfc7" event={"ID":"9860533e-121c-4025-b616-da777f3db9a3","Type":"ContainerDied","Data":"ccfa655e1684a71816eceb24655b1a4b1346bbde4b896a9d3c48cf5955985ec6"} Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.625482 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccfa655e1684a71816eceb24655b1a4b1346bbde4b896a9d3c48cf5955985ec6" Nov 22 09:01:07 crc kubenswrapper[4856]: I1122 09:01:07.625498 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396701-rhfc7" Nov 22 09:01:13 crc kubenswrapper[4856]: I1122 09:01:13.033251 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-p699w"] Nov 22 09:01:13 crc kubenswrapper[4856]: I1122 09:01:13.040774 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-p699w"] Nov 22 09:01:14 crc kubenswrapper[4856]: I1122 09:01:14.723045 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2212e72-48ef-465e-9839-473d346956cf" path="/var/lib/kubelet/pods/b2212e72-48ef-465e-9839-473d346956cf/volumes" Nov 22 09:01:47 crc kubenswrapper[4856]: I1122 09:01:47.946275 4856 scope.go:117] "RemoveContainer" containerID="c69d77b37819d69865e14ee6ae032db05f663d0c2bb98b95b2cd154737f096f2" Nov 22 09:01:47 crc kubenswrapper[4856]: I1122 09:01:47.986334 4856 scope.go:117] "RemoveContainer" containerID="46fbc23a6f52c136bea223cca57c9ce962ee3b91e3eb5220311bceb62393dc6f" Nov 22 09:01:48 crc kubenswrapper[4856]: I1122 09:01:48.026314 4856 scope.go:117] "RemoveContainer" containerID="919ac1cdbfababcff73b65739eb4a50d331f640a7b464e832589ede38abad25d" Nov 22 09:01:59 crc kubenswrapper[4856]: I1122 09:01:59.755491 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:01:59 crc kubenswrapper[4856]: I1122 09:01:59.756206 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:02:29 crc kubenswrapper[4856]: I1122 09:02:29.754848 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:02:29 crc kubenswrapper[4856]: I1122 09:02:29.755632 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:02:59 crc kubenswrapper[4856]: I1122 09:02:59.754394 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:02:59 crc kubenswrapper[4856]: I1122 09:02:59.755133 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:02:59 crc kubenswrapper[4856]: I1122 09:02:59.755192 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:02:59 crc kubenswrapper[4856]: I1122 09:02:59.756088 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bedeef61d55017516ee06c2d6c95ca3849842a59141d0c9e8b0d0befed04499"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:02:59 crc kubenswrapper[4856]: I1122 09:02:59.756161 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://8bedeef61d55017516ee06c2d6c95ca3849842a59141d0c9e8b0d0befed04499" gracePeriod=600 Nov 22 09:03:00 crc kubenswrapper[4856]: I1122 09:03:00.651110 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="8bedeef61d55017516ee06c2d6c95ca3849842a59141d0c9e8b0d0befed04499" exitCode=0 Nov 22 09:03:00 crc kubenswrapper[4856]: I1122 09:03:00.651183 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"8bedeef61d55017516ee06c2d6c95ca3849842a59141d0c9e8b0d0befed04499"} Nov 22 09:03:00 crc kubenswrapper[4856]: I1122 09:03:00.651797 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3"} Nov 22 09:03:00 crc kubenswrapper[4856]: I1122 09:03:00.651831 4856 scope.go:117] "RemoveContainer" containerID="121001251a38b83761fc3a6b5e2cf191760dc1698bfb1695f80360bacb62d79e" Nov 22 09:03:22 crc kubenswrapper[4856]: I1122 09:03:22.861561 4856 generic.go:334] "Generic (PLEG): container finished" podID="bf97b43b-e761-42f4-bd6b-837f60e9598c" containerID="b3f8c28b1ff122fe9b8c8b2a8f693263ab0d75407a6bb2b176fcc67ddfb1e100" exitCode=0 Nov 22 09:03:22 crc kubenswrapper[4856]: I1122 09:03:22.862245 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" event={"ID":"bf97b43b-e761-42f4-bd6b-837f60e9598c","Type":"ContainerDied","Data":"b3f8c28b1ff122fe9b8c8b2a8f693263ab0d75407a6bb2b176fcc67ddfb1e100"} Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.273366 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.370182 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-inventory\") pod \"bf97b43b-e761-42f4-bd6b-837f60e9598c\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.370231 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7n76\" (UniqueName: \"kubernetes.io/projected/bf97b43b-e761-42f4-bd6b-837f60e9598c-kube-api-access-f7n76\") pod \"bf97b43b-e761-42f4-bd6b-837f60e9598c\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.370439 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-tripleo-cleanup-combined-ca-bundle\") pod \"bf97b43b-e761-42f4-bd6b-837f60e9598c\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.370498 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-ssh-key\") pod \"bf97b43b-e761-42f4-bd6b-837f60e9598c\" (UID: \"bf97b43b-e761-42f4-bd6b-837f60e9598c\") " Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.377099 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "bf97b43b-e761-42f4-bd6b-837f60e9598c" (UID: "bf97b43b-e761-42f4-bd6b-837f60e9598c"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.378640 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf97b43b-e761-42f4-bd6b-837f60e9598c-kube-api-access-f7n76" (OuterVolumeSpecName: "kube-api-access-f7n76") pod "bf97b43b-e761-42f4-bd6b-837f60e9598c" (UID: "bf97b43b-e761-42f4-bd6b-837f60e9598c"). InnerVolumeSpecName "kube-api-access-f7n76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.402156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-inventory" (OuterVolumeSpecName: "inventory") pod "bf97b43b-e761-42f4-bd6b-837f60e9598c" (UID: "bf97b43b-e761-42f4-bd6b-837f60e9598c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.406756 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf97b43b-e761-42f4-bd6b-837f60e9598c" (UID: "bf97b43b-e761-42f4-bd6b-837f60e9598c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.473187 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.473225 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7n76\" (UniqueName: \"kubernetes.io/projected/bf97b43b-e761-42f4-bd6b-837f60e9598c-kube-api-access-f7n76\") on node \"crc\" DevicePath \"\"" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.473238 4856 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.473248 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf97b43b-e761-42f4-bd6b-837f60e9598c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.879580 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" event={"ID":"bf97b43b-e761-42f4-bd6b-837f60e9598c","Type":"ContainerDied","Data":"50954fe8ab97cf54c1006ecafe298485d83798b8e70db30b717b68315834a68a"} Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.879631 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50954fe8ab97cf54c1006ecafe298485d83798b8e70db30b717b68315834a68a" Nov 22 09:03:24 crc kubenswrapper[4856]: I1122 09:03:24.879656 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.464815 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-5tw45"] Nov 22 09:03:32 crc kubenswrapper[4856]: E1122 09:03:32.467370 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9860533e-121c-4025-b616-da777f3db9a3" containerName="keystone-cron" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.467402 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9860533e-121c-4025-b616-da777f3db9a3" containerName="keystone-cron" Nov 22 09:03:32 crc kubenswrapper[4856]: E1122 09:03:32.467419 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf97b43b-e761-42f4-bd6b-837f60e9598c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.467427 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf97b43b-e761-42f4-bd6b-837f60e9598c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.467640 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9860533e-121c-4025-b616-da777f3db9a3" containerName="keystone-cron" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.467667 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf97b43b-e761-42f4-bd6b-837f60e9598c" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.469017 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.477849 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.479437 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.479718 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.480421 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.509325 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-5tw45"] Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.544259 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.544351 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-inventory\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.544426 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcbbt\" (UniqueName: \"kubernetes.io/projected/edda8fe7-9e3d-4753-86c7-539cc18590d5-kube-api-access-qcbbt\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.544468 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.647178 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.647264 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-inventory\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.647360 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcbbt\" (UniqueName: \"kubernetes.io/projected/edda8fe7-9e3d-4753-86c7-539cc18590d5-kube-api-access-qcbbt\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.647421 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.655113 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-inventory\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.657098 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.662160 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.665444 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcbbt\" (UniqueName: \"kubernetes.io/projected/edda8fe7-9e3d-4753-86c7-539cc18590d5-kube-api-access-qcbbt\") pod \"bootstrap-openstack-openstack-cell1-5tw45\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:32 crc kubenswrapper[4856]: I1122 09:03:32.819105 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:03:33 crc kubenswrapper[4856]: I1122 09:03:33.364387 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-5tw45"] Nov 22 09:03:33 crc kubenswrapper[4856]: I1122 09:03:33.964458 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" event={"ID":"edda8fe7-9e3d-4753-86c7-539cc18590d5","Type":"ContainerStarted","Data":"c13c0bbd7da42c8bb3ecbe046dec3b0ef4adf192eb620f9e465d94410bd6af9d"} Nov 22 09:03:35 crc kubenswrapper[4856]: I1122 09:03:35.046793 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" event={"ID":"edda8fe7-9e3d-4753-86c7-539cc18590d5","Type":"ContainerStarted","Data":"69b7fe8d64f39d063b65fffc5ea3164e5349691cfba03abaae326b68baa93d3e"} Nov 22 09:03:35 crc kubenswrapper[4856]: I1122 09:03:35.083007 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" podStartSLOduration=2.664353444 podStartE2EDuration="3.082811377s" podCreationTimestamp="2025-11-22 09:03:32 +0000 UTC" firstStartedPulling="2025-11-22 09:03:33.370324453 +0000 UTC m=+7255.783717701" lastFinishedPulling="2025-11-22 09:03:33.788782376 +0000 UTC m=+7256.202175634" observedRunningTime="2025-11-22 09:03:35.068070889 +0000 UTC m=+7257.481464147" watchObservedRunningTime="2025-11-22 09:03:35.082811377 +0000 UTC m=+7257.496204635" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.328263 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l2w7n"] Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.331117 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.349997 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2w7n"] Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.438726 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2dn9\" (UniqueName: \"kubernetes.io/projected/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-kube-api-access-g2dn9\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.439140 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-catalog-content\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.439346 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-utilities\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.541211 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-utilities\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.541556 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2dn9\" (UniqueName: \"kubernetes.io/projected/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-kube-api-access-g2dn9\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.541674 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-catalog-content\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.541746 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-utilities\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.541949 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-catalog-content\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.561906 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g2dn9\" (UniqueName: \"kubernetes.io/projected/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-kube-api-access-g2dn9\") pod \"redhat-marketplace-l2w7n\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:57 crc kubenswrapper[4856]: I1122 09:04:57.662086 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:04:58 crc kubenswrapper[4856]: I1122 09:04:58.097794 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2w7n"] Nov 22 09:04:58 crc kubenswrapper[4856]: I1122 09:04:58.807725 4856 generic.go:334] "Generic (PLEG): container finished" podID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerID="8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df" exitCode=0 Nov 22 09:04:58 crc kubenswrapper[4856]: I1122 09:04:58.807784 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2w7n" event={"ID":"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3","Type":"ContainerDied","Data":"8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df"} Nov 22 09:04:58 crc kubenswrapper[4856]: I1122 09:04:58.808117 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2w7n" event={"ID":"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3","Type":"ContainerStarted","Data":"ebdf91d2f0e7ba04915bd5caf598a90ba5987e41de30c0c49b1863561a95fbba"} Nov 22 09:04:58 crc kubenswrapper[4856]: I1122 09:04:58.811311 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:05:00 crc kubenswrapper[4856]: I1122 09:05:00.827092 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2w7n" event={"ID":"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3","Type":"ContainerStarted","Data":"ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d"} Nov 22 09:05:01 crc kubenswrapper[4856]: I1122 09:05:01.839137 4856 generic.go:334] "Generic (PLEG): container finished" podID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerID="ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d" exitCode=0 Nov 22 09:05:01 crc kubenswrapper[4856]: I1122 09:05:01.839242 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2w7n" event={"ID":"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3","Type":"ContainerDied","Data":"ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d"} Nov 22 09:05:02 crc kubenswrapper[4856]: I1122 09:05:02.854695 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2w7n" event={"ID":"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3","Type":"ContainerStarted","Data":"55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb"} Nov 22 09:05:02 crc kubenswrapper[4856]: I1122 09:05:02.878741 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l2w7n" podStartSLOduration=2.146387574 podStartE2EDuration="5.87872486s" podCreationTimestamp="2025-11-22 09:04:57 +0000 UTC" firstStartedPulling="2025-11-22 09:04:58.810991341 +0000 UTC m=+7341.224384599" lastFinishedPulling="2025-11-22 09:05:02.543328637 +0000 UTC m=+7344.956721885" observedRunningTime="2025-11-22 09:05:02.875900424 +0000 UTC m=+7345.289293682" watchObservedRunningTime="2025-11-22 09:05:02.87872486 +0000 UTC 
m=+7345.292118118" Nov 22 09:05:07 crc kubenswrapper[4856]: I1122 09:05:07.662595 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:05:07 crc kubenswrapper[4856]: I1122 09:05:07.663027 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:05:07 crc kubenswrapper[4856]: I1122 09:05:07.711343 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:05:08 crc kubenswrapper[4856]: I1122 09:05:08.310255 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:05:08 crc kubenswrapper[4856]: I1122 09:05:08.366558 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2w7n"] Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.271970 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l2w7n" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="registry-server" containerID="cri-o://55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb" gracePeriod=2 Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.753626 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.947753 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2dn9\" (UniqueName: \"kubernetes.io/projected/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-kube-api-access-g2dn9\") pod \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.948231 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-catalog-content\") pod \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.948284 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-utilities\") pod \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\" (UID: \"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3\") " Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.949184 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-utilities" (OuterVolumeSpecName: "utilities") pod "3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" (UID: "3aa8558d-652a-4ceb-bb80-51a3e8ee88c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.954609 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-kube-api-access-g2dn9" (OuterVolumeSpecName: "kube-api-access-g2dn9") pod "3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" (UID: "3aa8558d-652a-4ceb-bb80-51a3e8ee88c3"). InnerVolumeSpecName "kube-api-access-g2dn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:05:10 crc kubenswrapper[4856]: I1122 09:05:10.965715 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" (UID: "3aa8558d-652a-4ceb-bb80-51a3e8ee88c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.051463 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.051555 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2dn9\" (UniqueName: \"kubernetes.io/projected/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-kube-api-access-g2dn9\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.051572 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.286912 4856 generic.go:334] "Generic (PLEG): container finished" podID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerID="55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb" exitCode=0 Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.286985 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2w7n" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.286986 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2w7n" event={"ID":"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3","Type":"ContainerDied","Data":"55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb"} Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.287039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2w7n" event={"ID":"3aa8558d-652a-4ceb-bb80-51a3e8ee88c3","Type":"ContainerDied","Data":"ebdf91d2f0e7ba04915bd5caf598a90ba5987e41de30c0c49b1863561a95fbba"} Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.287065 4856 scope.go:117] "RemoveContainer" containerID="55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.323883 4856 scope.go:117] "RemoveContainer" containerID="ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.325139 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2w7n"] Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.332567 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2w7n"] Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.346816 4856 scope.go:117] "RemoveContainer" containerID="8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.395484 4856 scope.go:117] "RemoveContainer" containerID="55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb" Nov 22 09:05:11 crc kubenswrapper[4856]: E1122 09:05:11.396133 4856 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb\": container with ID starting with 55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb not found: ID does not exist" containerID="55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.396172 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb"} err="failed to get container status \"55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb\": rpc error: code = NotFound desc = could not find container \"55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb\": container with ID starting with 55e546b9b4a865875c60c50cbc67a4b18c1e3e362301d8db18cdfd03bb2376bb not found: ID does not exist" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.396199 4856 scope.go:117] "RemoveContainer" containerID="ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d" Nov 22 09:05:11 crc kubenswrapper[4856]: E1122 09:05:11.396689 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d\": container with ID starting with ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d not found: ID does not exist" containerID="ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.396719 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d"} err="failed to get container status \"ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d\": rpc error: code = NotFound desc = could not find container \"ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d\": container with ID starting with ff0ee0c2fb5ee455449b12850e422ffb16ca2eac138a750d929d5910c019e88d not found: ID does not exist" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.396739 4856 scope.go:117] "RemoveContainer" containerID="8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df" Nov 22 09:05:11 crc kubenswrapper[4856]: E1122 09:05:11.397057 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df\": container with ID starting with 8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df not found: ID does not exist" containerID="8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df" Nov 22 09:05:11 crc kubenswrapper[4856]: I1122 09:05:11.397376 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df"} err="failed to get container status \"8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df\": rpc error: code = NotFound desc = could not find container \"8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df\": container with ID starting with 8905c315a86925df863b1bc7a186d4d6adc358ff87d51d17a02c9910752e23df not found: ID does not exist" Nov 22 09:05:12 crc kubenswrapper[4856]: I1122 09:05:12.723227 4856 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" path="/var/lib/kubelet/pods/3aa8558d-652a-4ceb-bb80-51a3e8ee88c3/volumes" Nov 22 09:05:28 crc kubenswrapper[4856]: I1122 09:05:28.984753 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rb7pb"] Nov 22 09:05:28 crc kubenswrapper[4856]: E1122 09:05:28.986272 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="extract-utilities" Nov 22 09:05:28 crc kubenswrapper[4856]: I1122 09:05:28.986296 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="extract-utilities" Nov 22 09:05:28 crc kubenswrapper[4856]: E1122 09:05:28.986323 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="extract-content" Nov 22 09:05:28 crc kubenswrapper[4856]: I1122 09:05:28.986331 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="extract-content" Nov 22 09:05:28 crc kubenswrapper[4856]: E1122 09:05:28.986348 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="registry-server" Nov 22 09:05:28 crc kubenswrapper[4856]: I1122 09:05:28.986356 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="registry-server" Nov 22 09:05:28 crc kubenswrapper[4856]: I1122 09:05:28.986684 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa8558d-652a-4ceb-bb80-51a3e8ee88c3" containerName="registry-server" Nov 22 09:05:28 crc kubenswrapper[4856]: I1122 09:05:28.988998 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.011036 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb7pb"] Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.067447 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftzmd\" (UniqueName: \"kubernetes.io/projected/76100e31-e6b6-4383-a753-b497a702b7f6-kube-api-access-ftzmd\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.067506 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-utilities\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.067540 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-catalog-content\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.168885 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftzmd\" (UniqueName: \"kubernetes.io/projected/76100e31-e6b6-4383-a753-b497a702b7f6-kube-api-access-ftzmd\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.168945 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-utilities\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.168970 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-catalog-content\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.169560 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-catalog-content\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.169640 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-utilities\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.189382 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ftzmd\" (UniqueName: \"kubernetes.io/projected/76100e31-e6b6-4383-a753-b497a702b7f6-kube-api-access-ftzmd\") pod \"redhat-operators-rb7pb\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.317219 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.754886 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.755329 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:05:29 crc kubenswrapper[4856]: I1122 09:05:29.769308 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb7pb"] Nov 22 09:05:30 crc kubenswrapper[4856]: I1122 09:05:30.454837 4856 generic.go:334] "Generic (PLEG): container finished" podID="76100e31-e6b6-4383-a753-b497a702b7f6" containerID="b94bf15ae5e84f2182fd0645808c8950ecb0d231501101198b2f34bea0302e73" exitCode=0 Nov 22 09:05:30 crc kubenswrapper[4856]: I1122 09:05:30.454899 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb7pb" event={"ID":"76100e31-e6b6-4383-a753-b497a702b7f6","Type":"ContainerDied","Data":"b94bf15ae5e84f2182fd0645808c8950ecb0d231501101198b2f34bea0302e73"} Nov 22 09:05:30 crc kubenswrapper[4856]: I1122 09:05:30.455925 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb7pb" event={"ID":"76100e31-e6b6-4383-a753-b497a702b7f6","Type":"ContainerStarted","Data":"ef0292600d0f5b28a44f45506aac53cba0784f2a859ba63287f92dd515a66f83"} Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.374006 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zpzwr"] Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.376729 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.385728 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpzwr"] Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.412488 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-catalog-content\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.412696 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nxn\" (UniqueName: \"kubernetes.io/projected/01f00bae-a3d2-4197-b0e3-057439d2264f-kube-api-access-v6nxn\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.412863 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-utilities\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.514648 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-utilities\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.515069 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-catalog-content\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.515155 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6nxn\" (UniqueName: \"kubernetes.io/projected/01f00bae-a3d2-4197-b0e3-057439d2264f-kube-api-access-v6nxn\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.516079 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-utilities\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.516437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-catalog-content\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.539366 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v6nxn\" (UniqueName: \"kubernetes.io/projected/01f00bae-a3d2-4197-b0e3-057439d2264f-kube-api-access-v6nxn\") pod \"certified-operators-zpzwr\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:31 crc kubenswrapper[4856]: I1122 09:05:31.704046 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:32 crc kubenswrapper[4856]: I1122 09:05:32.286981 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zpzwr"] Nov 22 09:05:32 crc kubenswrapper[4856]: W1122 09:05:32.291298 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01f00bae_a3d2_4197_b0e3_057439d2264f.slice/crio-a20694da89e57623eb2d1b65dd859c0939ed2ae2d0616e12abd621407b5ec27b WatchSource:0}: Error finding container a20694da89e57623eb2d1b65dd859c0939ed2ae2d0616e12abd621407b5ec27b: Status 404 returned error can't find the container with id a20694da89e57623eb2d1b65dd859c0939ed2ae2d0616e12abd621407b5ec27b Nov 22 09:05:32 crc kubenswrapper[4856]: I1122 09:05:32.472601 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzwr" event={"ID":"01f00bae-a3d2-4197-b0e3-057439d2264f","Type":"ContainerStarted","Data":"a20694da89e57623eb2d1b65dd859c0939ed2ae2d0616e12abd621407b5ec27b"} Nov 22 09:05:32 crc kubenswrapper[4856]: I1122 09:05:32.476271 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb7pb" event={"ID":"76100e31-e6b6-4383-a753-b497a702b7f6","Type":"ContainerStarted","Data":"0787c12890076f3453780407f187e1bfe4a7f8f08f41623d8d4a27ade6f379d4"} Nov 22 09:05:33 crc kubenswrapper[4856]: I1122 09:05:33.487416 4856 generic.go:334] "Generic (PLEG): container finished" podID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerID="ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab" exitCode=0 Nov 22 09:05:33 crc kubenswrapper[4856]: I1122 09:05:33.487460 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzwr" event={"ID":"01f00bae-a3d2-4197-b0e3-057439d2264f","Type":"ContainerDied","Data":"ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab"} Nov 22 09:05:37 crc kubenswrapper[4856]: I1122 09:05:37.526306 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzwr" event={"ID":"01f00bae-a3d2-4197-b0e3-057439d2264f","Type":"ContainerStarted","Data":"ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2"} Nov 22 09:05:45 crc kubenswrapper[4856]: I1122 09:05:45.604501 4856 generic.go:334] "Generic (PLEG): container finished" podID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerID="ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2" exitCode=0 Nov 22 09:05:45 crc kubenswrapper[4856]: I1122 09:05:45.604546 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzwr" event={"ID":"01f00bae-a3d2-4197-b0e3-057439d2264f","Type":"ContainerDied","Data":"ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2"} Nov 22 09:05:48 crc kubenswrapper[4856]: I1122 09:05:48.635345 4856 generic.go:334] "Generic (PLEG): container finished" podID="76100e31-e6b6-4383-a753-b497a702b7f6" 
containerID="0787c12890076f3453780407f187e1bfe4a7f8f08f41623d8d4a27ade6f379d4" exitCode=0 Nov 22 09:05:48 crc kubenswrapper[4856]: I1122 09:05:48.635444 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb7pb" event={"ID":"76100e31-e6b6-4383-a753-b497a702b7f6","Type":"ContainerDied","Data":"0787c12890076f3453780407f187e1bfe4a7f8f08f41623d8d4a27ade6f379d4"} Nov 22 09:05:48 crc kubenswrapper[4856]: I1122 09:05:48.639686 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzwr" event={"ID":"01f00bae-a3d2-4197-b0e3-057439d2264f","Type":"ContainerStarted","Data":"66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4"} Nov 22 09:05:48 crc kubenswrapper[4856]: I1122 09:05:48.674176 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zpzwr" podStartSLOduration=2.891473833 podStartE2EDuration="17.674156088s" podCreationTimestamp="2025-11-22 09:05:31 +0000 UTC" firstStartedPulling="2025-11-22 09:05:33.48970294 +0000 UTC m=+7375.903096198" lastFinishedPulling="2025-11-22 09:05:48.272385195 +0000 UTC m=+7390.685778453" observedRunningTime="2025-11-22 09:05:48.671381144 +0000 UTC m=+7391.084774402" watchObservedRunningTime="2025-11-22 09:05:48.674156088 +0000 UTC m=+7391.087549346" Nov 22 09:05:49 crc kubenswrapper[4856]: I1122 09:05:49.656963 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb7pb" event={"ID":"76100e31-e6b6-4383-a753-b497a702b7f6","Type":"ContainerStarted","Data":"c45121e7c64145462036e23b400bbc9383b6ba5b1848a47c8bad88525aa3fd07"} Nov 22 09:05:49 crc kubenswrapper[4856]: I1122 09:05:49.683166 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rb7pb" podStartSLOduration=2.97400539 podStartE2EDuration="21.683137454s" podCreationTimestamp="2025-11-22 09:05:28 +0000 UTC" firstStartedPulling="2025-11-22 09:05:30.456883697 +0000 UTC m=+7372.870276955" lastFinishedPulling="2025-11-22 09:05:49.166015761 +0000 UTC m=+7391.579409019" observedRunningTime="2025-11-22 09:05:49.679586928 +0000 UTC m=+7392.092980186" watchObservedRunningTime="2025-11-22 09:05:49.683137454 +0000 UTC m=+7392.096530712" Nov 22 09:05:51 crc kubenswrapper[4856]: I1122 09:05:51.704985 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:51 crc kubenswrapper[4856]: I1122 09:05:51.705659 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:05:52 crc kubenswrapper[4856]: I1122 09:05:52.757117 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zpzwr" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="registry-server" probeResult="failure" output=< Nov 22 09:05:52 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:05:52 crc kubenswrapper[4856]: > Nov 22 09:05:59 crc kubenswrapper[4856]: I1122 09:05:59.318115 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:59 crc kubenswrapper[4856]: I1122 09:05:59.319065 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:59 crc kubenswrapper[4856]: I1122 09:05:59.375205 
4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:05:59 crc kubenswrapper[4856]: I1122 09:05:59.754143 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:05:59 crc kubenswrapper[4856]: I1122 09:05:59.755102 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:05:59 crc kubenswrapper[4856]: I1122 09:05:59.808116 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:06:00 crc kubenswrapper[4856]: I1122 09:06:00.187301 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rb7pb"] Nov 22 09:06:01 crc kubenswrapper[4856]: I1122 09:06:01.756015 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:06:01 crc kubenswrapper[4856]: I1122 09:06:01.780948 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rb7pb" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="registry-server" containerID="cri-o://c45121e7c64145462036e23b400bbc9383b6ba5b1848a47c8bad88525aa3fd07" gracePeriod=2 Nov 22 09:06:01 crc kubenswrapper[4856]: I1122 09:06:01.812101 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:06:02 crc kubenswrapper[4856]: I1122 09:06:02.589107 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpzwr"] Nov 22 09:06:02 crc kubenswrapper[4856]: I1122 09:06:02.792841 4856 generic.go:334] "Generic (PLEG): container finished" podID="76100e31-e6b6-4383-a753-b497a702b7f6" containerID="c45121e7c64145462036e23b400bbc9383b6ba5b1848a47c8bad88525aa3fd07" exitCode=0 Nov 22 09:06:02 crc kubenswrapper[4856]: I1122 09:06:02.793335 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zpzwr" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="registry-server" containerID="cri-o://66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4" gracePeriod=2 Nov 22 09:06:02 crc kubenswrapper[4856]: I1122 09:06:02.793742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb7pb" event={"ID":"76100e31-e6b6-4383-a753-b497a702b7f6","Type":"ContainerDied","Data":"c45121e7c64145462036e23b400bbc9383b6ba5b1848a47c8bad88525aa3fd07"} Nov 22 09:06:02 crc kubenswrapper[4856]: I1122 09:06:02.793768 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb7pb" event={"ID":"76100e31-e6b6-4383-a753-b497a702b7f6","Type":"ContainerDied","Data":"ef0292600d0f5b28a44f45506aac53cba0784f2a859ba63287f92dd515a66f83"} Nov 22 09:06:02 crc kubenswrapper[4856]: I1122 09:06:02.793796 4856 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ef0292600d0f5b28a44f45506aac53cba0784f2a859ba63287f92dd515a66f83" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.010520 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.103557 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-catalog-content\") pod \"76100e31-e6b6-4383-a753-b497a702b7f6\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.103649 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftzmd\" (UniqueName: \"kubernetes.io/projected/76100e31-e6b6-4383-a753-b497a702b7f6-kube-api-access-ftzmd\") pod \"76100e31-e6b6-4383-a753-b497a702b7f6\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.103760 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-utilities\") pod \"76100e31-e6b6-4383-a753-b497a702b7f6\" (UID: \"76100e31-e6b6-4383-a753-b497a702b7f6\") " Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.105488 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-utilities" (OuterVolumeSpecName: "utilities") pod "76100e31-e6b6-4383-a753-b497a702b7f6" (UID: "76100e31-e6b6-4383-a753-b497a702b7f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.111331 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76100e31-e6b6-4383-a753-b497a702b7f6-kube-api-access-ftzmd" (OuterVolumeSpecName: "kube-api-access-ftzmd") pod "76100e31-e6b6-4383-a753-b497a702b7f6" (UID: "76100e31-e6b6-4383-a753-b497a702b7f6"). InnerVolumeSpecName "kube-api-access-ftzmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.207029 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftzmd\" (UniqueName: \"kubernetes.io/projected/76100e31-e6b6-4383-a753-b497a702b7f6-kube-api-access-ftzmd\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.208360 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.214457 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76100e31-e6b6-4383-a753-b497a702b7f6" (UID: "76100e31-e6b6-4383-a753-b497a702b7f6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.223890 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.309727 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-catalog-content\") pod \"01f00bae-a3d2-4197-b0e3-057439d2264f\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.309919 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-utilities\") pod \"01f00bae-a3d2-4197-b0e3-057439d2264f\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.309979 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6nxn\" (UniqueName: \"kubernetes.io/projected/01f00bae-a3d2-4197-b0e3-057439d2264f-kube-api-access-v6nxn\") pod \"01f00bae-a3d2-4197-b0e3-057439d2264f\" (UID: \"01f00bae-a3d2-4197-b0e3-057439d2264f\") " Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.310522 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76100e31-e6b6-4383-a753-b497a702b7f6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.310946 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-utilities" (OuterVolumeSpecName: "utilities") pod "01f00bae-a3d2-4197-b0e3-057439d2264f" (UID: "01f00bae-a3d2-4197-b0e3-057439d2264f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.314208 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f00bae-a3d2-4197-b0e3-057439d2264f-kube-api-access-v6nxn" (OuterVolumeSpecName: "kube-api-access-v6nxn") pod "01f00bae-a3d2-4197-b0e3-057439d2264f" (UID: "01f00bae-a3d2-4197-b0e3-057439d2264f"). InnerVolumeSpecName "kube-api-access-v6nxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.358135 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01f00bae-a3d2-4197-b0e3-057439d2264f" (UID: "01f00bae-a3d2-4197-b0e3-057439d2264f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.412248 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.412276 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01f00bae-a3d2-4197-b0e3-057439d2264f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.412289 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6nxn\" (UniqueName: \"kubernetes.io/projected/01f00bae-a3d2-4197-b0e3-057439d2264f-kube-api-access-v6nxn\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.805299 4856 generic.go:334] "Generic (PLEG): container finished" podID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerID="66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4" exitCode=0 Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.805439 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zpzwr" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.805452 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb7pb" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.805425 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzwr" event={"ID":"01f00bae-a3d2-4197-b0e3-057439d2264f","Type":"ContainerDied","Data":"66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4"} Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.805598 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zpzwr" event={"ID":"01f00bae-a3d2-4197-b0e3-057439d2264f","Type":"ContainerDied","Data":"a20694da89e57623eb2d1b65dd859c0939ed2ae2d0616e12abd621407b5ec27b"} Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.805621 4856 scope.go:117] "RemoveContainer" containerID="66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.835116 4856 scope.go:117] "RemoveContainer" containerID="ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.848804 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zpzwr"] Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.860605 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zpzwr"] Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.869559 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rb7pb"] Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.878358 4856 scope.go:117] "RemoveContainer" containerID="ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.878777 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rb7pb"] Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.921249 4856 scope.go:117] "RemoveContainer" 
containerID="66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4" Nov 22 09:06:03 crc kubenswrapper[4856]: E1122 09:06:03.921952 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4\": container with ID starting with 66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4 not found: ID does not exist" containerID="66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.921993 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4"} err="failed to get container status \"66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4\": rpc error: code = NotFound desc = could not find container \"66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4\": container with ID starting with 66978f4831d3b89857f9c735782f59127dba33c443e710eb81567d0e55d6e6a4 not found: ID does not exist" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.922022 4856 scope.go:117] "RemoveContainer" containerID="ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2" Nov 22 09:06:03 crc kubenswrapper[4856]: E1122 09:06:03.923005 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2\": container with ID starting with ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2 not found: ID does not exist" containerID="ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.923039 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2"} err="failed to get container status \"ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2\": rpc error: code = NotFound desc = could not find container \"ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2\": container with ID starting with ea064b288674e21e292dee7399a7043f051e868cc65d0a874833935eb44bf0e2 not found: ID does not exist" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.923059 4856 scope.go:117] "RemoveContainer" containerID="ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab" Nov 22 09:06:03 crc kubenswrapper[4856]: E1122 09:06:03.927693 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab\": container with ID starting with ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab not found: ID does not exist" containerID="ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab" Nov 22 09:06:03 crc kubenswrapper[4856]: I1122 09:06:03.927757 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab"} err="failed to get container status \"ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab\": rpc error: code = NotFound desc = could not find container \"ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab\": container with ID starting with 
ffee4a08bb552fa9b8f6a265de396aa50ce54a8d89aa3486fc946841a3362aab not found: ID does not exist" Nov 22 09:06:04 crc kubenswrapper[4856]: I1122 09:06:04.722765 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" path="/var/lib/kubelet/pods/01f00bae-a3d2-4197-b0e3-057439d2264f/volumes" Nov 22 09:06:04 crc kubenswrapper[4856]: I1122 09:06:04.723924 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" path="/var/lib/kubelet/pods/76100e31-e6b6-4383-a753-b497a702b7f6/volumes" Nov 22 09:06:29 crc kubenswrapper[4856]: I1122 09:06:29.754290 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:06:29 crc kubenswrapper[4856]: I1122 09:06:29.754944 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:06:29 crc kubenswrapper[4856]: I1122 09:06:29.754990 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:06:29 crc kubenswrapper[4856]: I1122 09:06:29.755841 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:06:29 crc kubenswrapper[4856]: I1122 09:06:29.755899 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" gracePeriod=600 Nov 22 09:06:29 crc kubenswrapper[4856]: E1122 09:06:29.983056 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:06:30 crc kubenswrapper[4856]: I1122 09:06:30.055011 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" exitCode=0 Nov 22 09:06:30 crc kubenswrapper[4856]: I1122 09:06:30.055074 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3"} Nov 22 09:06:30 crc kubenswrapper[4856]: I1122 09:06:30.055188 4856 scope.go:117] 
"RemoveContainer" containerID="8bedeef61d55017516ee06c2d6c95ca3849842a59141d0c9e8b0d0befed04499" Nov 22 09:06:30 crc kubenswrapper[4856]: I1122 09:06:30.057309 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:06:30 crc kubenswrapper[4856]: E1122 09:06:30.058151 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:06:39 crc kubenswrapper[4856]: I1122 09:06:39.138348 4856 generic.go:334] "Generic (PLEG): container finished" podID="edda8fe7-9e3d-4753-86c7-539cc18590d5" containerID="69b7fe8d64f39d063b65fffc5ea3164e5349691cfba03abaae326b68baa93d3e" exitCode=0 Nov 22 09:06:39 crc kubenswrapper[4856]: I1122 09:06:39.138530 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" event={"ID":"edda8fe7-9e3d-4753-86c7-539cc18590d5","Type":"ContainerDied","Data":"69b7fe8d64f39d063b65fffc5ea3164e5349691cfba03abaae326b68baa93d3e"} Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.710427 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:06:40 crc kubenswrapper[4856]: E1122 09:06:40.711263 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.823876 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.894800 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcbbt\" (UniqueName: \"kubernetes.io/projected/edda8fe7-9e3d-4753-86c7-539cc18590d5-kube-api-access-qcbbt\") pod \"edda8fe7-9e3d-4753-86c7-539cc18590d5\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.894935 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-inventory\") pod \"edda8fe7-9e3d-4753-86c7-539cc18590d5\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.895034 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-ssh-key\") pod \"edda8fe7-9e3d-4753-86c7-539cc18590d5\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.895151 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-bootstrap-combined-ca-bundle\") pod \"edda8fe7-9e3d-4753-86c7-539cc18590d5\" (UID: \"edda8fe7-9e3d-4753-86c7-539cc18590d5\") " Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.901913 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "edda8fe7-9e3d-4753-86c7-539cc18590d5" (UID: "edda8fe7-9e3d-4753-86c7-539cc18590d5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.904014 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edda8fe7-9e3d-4753-86c7-539cc18590d5-kube-api-access-qcbbt" (OuterVolumeSpecName: "kube-api-access-qcbbt") pod "edda8fe7-9e3d-4753-86c7-539cc18590d5" (UID: "edda8fe7-9e3d-4753-86c7-539cc18590d5"). InnerVolumeSpecName "kube-api-access-qcbbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.925471 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "edda8fe7-9e3d-4753-86c7-539cc18590d5" (UID: "edda8fe7-9e3d-4753-86c7-539cc18590d5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.936237 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-inventory" (OuterVolumeSpecName: "inventory") pod "edda8fe7-9e3d-4753-86c7-539cc18590d5" (UID: "edda8fe7-9e3d-4753-86c7-539cc18590d5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.997011 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.997047 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.997057 4856 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edda8fe7-9e3d-4753-86c7-539cc18590d5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:40 crc kubenswrapper[4856]: I1122 09:06:40.997067 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcbbt\" (UniqueName: \"kubernetes.io/projected/edda8fe7-9e3d-4753-86c7-539cc18590d5-kube-api-access-qcbbt\") on node \"crc\" DevicePath \"\"" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.159574 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" event={"ID":"edda8fe7-9e3d-4753-86c7-539cc18590d5","Type":"ContainerDied","Data":"c13c0bbd7da42c8bb3ecbe046dec3b0ef4adf192eb620f9e465d94410bd6af9d"} Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.159640 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13c0bbd7da42c8bb3ecbe046dec3b0ef4adf192eb620f9e465d94410bd6af9d" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.159638 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-5tw45" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.315838 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-59zw7"] Nov 22 09:06:41 crc kubenswrapper[4856]: E1122 09:06:41.316491 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="extract-content" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.316529 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="extract-content" Nov 22 09:06:41 crc kubenswrapper[4856]: E1122 09:06:41.316546 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.316554 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4856]: E1122 09:06:41.316582 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.316591 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4856]: E1122 09:06:41.316657 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="extract-content" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.316666 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="extract-content" Nov 22 09:06:41 crc kubenswrapper[4856]: E1122 09:06:41.316683 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="extract-utilities" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.316693 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="extract-utilities" Nov 22 09:06:41 crc kubenswrapper[4856]: E1122 09:06:41.316715 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="extract-utilities" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.316724 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="extract-utilities" Nov 22 09:06:41 crc kubenswrapper[4856]: E1122 09:06:41.316746 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edda8fe7-9e3d-4753-86c7-539cc18590d5" containerName="bootstrap-openstack-openstack-cell1" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.316755 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="edda8fe7-9e3d-4753-86c7-539cc18590d5" containerName="bootstrap-openstack-openstack-cell1" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.317033 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="edda8fe7-9e3d-4753-86c7-539cc18590d5" containerName="bootstrap-openstack-openstack-cell1" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.317052 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="76100e31-e6b6-4383-a753-b497a702b7f6" containerName="registry-server" Nov 22 09:06:41 crc 
kubenswrapper[4856]: I1122 09:06:41.317088 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f00bae-a3d2-4197-b0e3-057439d2264f" containerName="registry-server" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.318130 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.320150 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.320687 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.320857 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.320992 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.325730 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-59zw7"] Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.405855 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm25z\" (UniqueName: \"kubernetes.io/projected/1019a693-31a9-4b08-bc98-878920e83124-kube-api-access-pm25z\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.405920 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-ssh-key\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.406046 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-inventory\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.507848 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-inventory\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.508311 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm25z\" (UniqueName: \"kubernetes.io/projected/1019a693-31a9-4b08-bc98-878920e83124-kube-api-access-pm25z\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.508372 4856 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-ssh-key\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.512670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-inventory\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.513095 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-ssh-key\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.527839 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm25z\" (UniqueName: \"kubernetes.io/projected/1019a693-31a9-4b08-bc98-878920e83124-kube-api-access-pm25z\") pod \"download-cache-openstack-openstack-cell1-59zw7\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:41 crc kubenswrapper[4856]: I1122 09:06:41.635659 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:06:42 crc kubenswrapper[4856]: I1122 09:06:42.141833 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-59zw7"] Nov 22 09:06:42 crc kubenswrapper[4856]: I1122 09:06:42.170287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" event={"ID":"1019a693-31a9-4b08-bc98-878920e83124","Type":"ContainerStarted","Data":"28cedb3cfd8ba75ecb6e88bc435ce2ad3e2e29b078642b310bc0f7f2ec645965"} Nov 22 09:06:44 crc kubenswrapper[4856]: I1122 09:06:44.218640 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" event={"ID":"1019a693-31a9-4b08-bc98-878920e83124","Type":"ContainerStarted","Data":"6544ce506f596ca70b4d6880c35bc5d670f893b3e3953019ba26d421c57e975d"} Nov 22 09:06:44 crc kubenswrapper[4856]: I1122 09:06:44.243344 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" podStartSLOduration=2.255362605 podStartE2EDuration="3.243321254s" podCreationTimestamp="2025-11-22 09:06:41 +0000 UTC" firstStartedPulling="2025-11-22 09:06:42.147127924 +0000 UTC m=+7444.560521182" lastFinishedPulling="2025-11-22 09:06:43.135086553 +0000 UTC m=+7445.548479831" observedRunningTime="2025-11-22 09:06:44.23465577 +0000 UTC m=+7446.648049028" watchObservedRunningTime="2025-11-22 09:06:44.243321254 +0000 UTC m=+7446.656714512" Nov 22 09:06:51 crc kubenswrapper[4856]: I1122 09:06:51.710531 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:06:51 crc kubenswrapper[4856]: E1122 09:06:51.711937 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:07:05 crc kubenswrapper[4856]: I1122 09:07:05.709931 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:07:05 crc kubenswrapper[4856]: E1122 09:07:05.710909 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:07:17 crc kubenswrapper[4856]: I1122 09:07:17.709482 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:07:17 crc kubenswrapper[4856]: E1122 09:07:17.710327 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:07:28 crc kubenswrapper[4856]: I1122 09:07:28.715298 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:07:28 crc kubenswrapper[4856]: E1122 09:07:28.715995 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:07:41 crc kubenswrapper[4856]: I1122 09:07:41.710830 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:07:41 crc kubenswrapper[4856]: E1122 09:07:41.712251 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:07:54 crc kubenswrapper[4856]: I1122 09:07:54.709770 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:07:54 crc kubenswrapper[4856]: E1122 09:07:54.710740 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:08:07 crc kubenswrapper[4856]: I1122 09:08:07.709950 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:08:07 crc kubenswrapper[4856]: E1122 09:08:07.710739 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:08:14 crc kubenswrapper[4856]: I1122 09:08:14.096649 4856 generic.go:334] "Generic (PLEG): container finished" podID="1019a693-31a9-4b08-bc98-878920e83124" containerID="6544ce506f596ca70b4d6880c35bc5d670f893b3e3953019ba26d421c57e975d" exitCode=0 Nov 22 09:08:14 crc kubenswrapper[4856]: I1122 09:08:14.096739 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" event={"ID":"1019a693-31a9-4b08-bc98-878920e83124","Type":"ContainerDied","Data":"6544ce506f596ca70b4d6880c35bc5d670f893b3e3953019ba26d421c57e975d"} Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.567370 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.663690 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-ssh-key\") pod \"1019a693-31a9-4b08-bc98-878920e83124\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.663925 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-inventory\") pod \"1019a693-31a9-4b08-bc98-878920e83124\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.663967 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm25z\" (UniqueName: \"kubernetes.io/projected/1019a693-31a9-4b08-bc98-878920e83124-kube-api-access-pm25z\") pod \"1019a693-31a9-4b08-bc98-878920e83124\" (UID: \"1019a693-31a9-4b08-bc98-878920e83124\") " Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.671702 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1019a693-31a9-4b08-bc98-878920e83124-kube-api-access-pm25z" (OuterVolumeSpecName: "kube-api-access-pm25z") pod "1019a693-31a9-4b08-bc98-878920e83124" (UID: "1019a693-31a9-4b08-bc98-878920e83124"). InnerVolumeSpecName "kube-api-access-pm25z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.695407 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1019a693-31a9-4b08-bc98-878920e83124" (UID: "1019a693-31a9-4b08-bc98-878920e83124"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.696829 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-inventory" (OuterVolumeSpecName: "inventory") pod "1019a693-31a9-4b08-bc98-878920e83124" (UID: "1019a693-31a9-4b08-bc98-878920e83124"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.766930 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.766982 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm25z\" (UniqueName: \"kubernetes.io/projected/1019a693-31a9-4b08-bc98-878920e83124-kube-api-access-pm25z\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:15 crc kubenswrapper[4856]: I1122 09:08:15.766996 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1019a693-31a9-4b08-bc98-878920e83124-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.119272 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" event={"ID":"1019a693-31a9-4b08-bc98-878920e83124","Type":"ContainerDied","Data":"28cedb3cfd8ba75ecb6e88bc435ce2ad3e2e29b078642b310bc0f7f2ec645965"} Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.119644 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28cedb3cfd8ba75ecb6e88bc435ce2ad3e2e29b078642b310bc0f7f2ec645965" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.119355 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-59zw7" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.197335 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hxn22"] Nov 22 09:08:16 crc kubenswrapper[4856]: E1122 09:08:16.197834 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1019a693-31a9-4b08-bc98-878920e83124" containerName="download-cache-openstack-openstack-cell1" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.197856 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1019a693-31a9-4b08-bc98-878920e83124" containerName="download-cache-openstack-openstack-cell1" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.198091 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1019a693-31a9-4b08-bc98-878920e83124" containerName="download-cache-openstack-openstack-cell1" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.199114 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.201198 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.202149 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.202282 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.204952 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.217113 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hxn22"] Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.280614 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-ssh-key\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.281578 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-inventory\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.281877 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7sh\" (UniqueName: \"kubernetes.io/projected/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-kube-api-access-gj7sh\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.384006 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-ssh-key\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.384226 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-inventory\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.384298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj7sh\" (UniqueName: \"kubernetes.io/projected/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-kube-api-access-gj7sh\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " 
pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.388322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-inventory\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.388446 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-ssh-key\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.402026 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj7sh\" (UniqueName: \"kubernetes.io/projected/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-kube-api-access-gj7sh\") pod \"configure-network-openstack-openstack-cell1-hxn22\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:16 crc kubenswrapper[4856]: I1122 09:08:16.519314 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:08:17 crc kubenswrapper[4856]: I1122 09:08:17.050329 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-hxn22"] Nov 22 09:08:17 crc kubenswrapper[4856]: I1122 09:08:17.129789 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" event={"ID":"2279f2ab-cdc9-4bbb-9d75-4f259de8f544","Type":"ContainerStarted","Data":"3310ec4745939fe9cfcb5b61a8521426646e8b6ce56cd16a29311e847e6ed0c3"} Nov 22 09:08:18 crc kubenswrapper[4856]: I1122 09:08:18.719309 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:08:18 crc kubenswrapper[4856]: E1122 09:08:18.720653 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:08:19 crc kubenswrapper[4856]: I1122 09:08:19.161975 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" event={"ID":"2279f2ab-cdc9-4bbb-9d75-4f259de8f544","Type":"ContainerStarted","Data":"7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd"} Nov 22 09:08:19 crc kubenswrapper[4856]: I1122 09:08:19.186463 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" podStartSLOduration=1.5122040669999999 podStartE2EDuration="3.18644399s" podCreationTimestamp="2025-11-22 09:08:16 +0000 UTC" firstStartedPulling="2025-11-22 09:08:17.056832259 +0000 UTC m=+7539.470225517" lastFinishedPulling="2025-11-22 09:08:18.731072182 +0000 UTC m=+7541.144465440" 
observedRunningTime="2025-11-22 09:08:19.179586055 +0000 UTC m=+7541.592979313" watchObservedRunningTime="2025-11-22 09:08:19.18644399 +0000 UTC m=+7541.599837248" Nov 22 09:08:32 crc kubenswrapper[4856]: I1122 09:08:32.712075 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:08:32 crc kubenswrapper[4856]: E1122 09:08:32.713666 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:08:44 crc kubenswrapper[4856]: I1122 09:08:44.710322 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:08:44 crc kubenswrapper[4856]: E1122 09:08:44.711258 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:08:56 crc kubenswrapper[4856]: I1122 09:08:56.710477 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:08:56 crc kubenswrapper[4856]: E1122 09:08:56.711216 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:09:08 crc kubenswrapper[4856]: I1122 09:09:08.715322 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:09:08 crc kubenswrapper[4856]: E1122 09:09:08.716161 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.190306 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qqmr8"] Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.192861 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.204288 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqmr8"] Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.302971 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-catalog-content\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.303066 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-utilities\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.303119 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5btkq\" (UniqueName: \"kubernetes.io/projected/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-kube-api-access-5btkq\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.405729 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5btkq\" (UniqueName: \"kubernetes.io/projected/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-kube-api-access-5btkq\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.405972 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-catalog-content\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.406045 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-utilities\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.406566 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-catalog-content\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.406602 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-utilities\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.424523 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5btkq\" (UniqueName: \"kubernetes.io/projected/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-kube-api-access-5btkq\") pod \"community-operators-qqmr8\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:10 crc kubenswrapper[4856]: I1122 09:09:10.517386 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:11 crc kubenswrapper[4856]: I1122 09:09:11.077065 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qqmr8"] Nov 22 09:09:11 crc kubenswrapper[4856]: I1122 09:09:11.682701 4856 generic.go:334] "Generic (PLEG): container finished" podID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerID="2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3" exitCode=0 Nov 22 09:09:11 crc kubenswrapper[4856]: I1122 09:09:11.682782 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqmr8" event={"ID":"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a","Type":"ContainerDied","Data":"2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3"} Nov 22 09:09:11 crc kubenswrapper[4856]: I1122 09:09:11.683064 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqmr8" event={"ID":"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a","Type":"ContainerStarted","Data":"c526e65d8a23551a767ef14b7e86b436a92b7d60364e925915286416cf17c301"} Nov 22 09:09:14 crc kubenswrapper[4856]: I1122 09:09:14.708268 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqmr8" event={"ID":"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a","Type":"ContainerStarted","Data":"13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c"} Nov 22 09:09:18 crc kubenswrapper[4856]: I1122 09:09:18.745103 4856 generic.go:334] "Generic (PLEG): container finished" podID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerID="13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c" exitCode=0 Nov 22 09:09:18 crc kubenswrapper[4856]: I1122 09:09:18.745149 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqmr8" event={"ID":"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a","Type":"ContainerDied","Data":"13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c"} Nov 22 09:09:21 crc kubenswrapper[4856]: I1122 09:09:21.775287 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqmr8" event={"ID":"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a","Type":"ContainerStarted","Data":"e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149"} Nov 22 09:09:21 crc kubenswrapper[4856]: I1122 09:09:21.796879 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qqmr8" podStartSLOduration=2.652090621 podStartE2EDuration="11.79686309s" podCreationTimestamp="2025-11-22 09:09:10 +0000 UTC" firstStartedPulling="2025-11-22 09:09:11.684143162 +0000 UTC m=+7594.097536420" lastFinishedPulling="2025-11-22 09:09:20.828915631 +0000 UTC m=+7603.242308889" observedRunningTime="2025-11-22 09:09:21.792687567 +0000 UTC m=+7604.206080825" watchObservedRunningTime="2025-11-22 09:09:21.79686309 +0000 UTC m=+7604.210256338" Nov 22 09:09:22 crc kubenswrapper[4856]: I1122 09:09:22.709111 4856 scope.go:117] "RemoveContainer" 
containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:09:22 crc kubenswrapper[4856]: E1122 09:09:22.709499 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:09:30 crc kubenswrapper[4856]: I1122 09:09:30.518222 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:30 crc kubenswrapper[4856]: I1122 09:09:30.519180 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:30 crc kubenswrapper[4856]: I1122 09:09:30.569288 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:30 crc kubenswrapper[4856]: I1122 09:09:30.894024 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:30 crc kubenswrapper[4856]: I1122 09:09:30.940069 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqmr8"] Nov 22 09:09:32 crc kubenswrapper[4856]: I1122 09:09:32.867473 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qqmr8" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="registry-server" containerID="cri-o://e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149" gracePeriod=2 Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.342684 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.479894 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5btkq\" (UniqueName: \"kubernetes.io/projected/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-kube-api-access-5btkq\") pod \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.480001 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-catalog-content\") pod \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.480317 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-utilities\") pod \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\" (UID: \"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a\") " Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.481312 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-utilities" (OuterVolumeSpecName: "utilities") pod "feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" (UID: "feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.486373 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-kube-api-access-5btkq" (OuterVolumeSpecName: "kube-api-access-5btkq") pod "feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" (UID: "feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a"). InnerVolumeSpecName "kube-api-access-5btkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.530163 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" (UID: "feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.583584 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.583628 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5btkq\" (UniqueName: \"kubernetes.io/projected/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-kube-api-access-5btkq\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.583639 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.709459 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:09:33 crc kubenswrapper[4856]: E1122 09:09:33.709881 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.882632 4856 generic.go:334] "Generic (PLEG): container finished" podID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerID="e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149" exitCode=0 Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.882681 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqmr8" event={"ID":"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a","Type":"ContainerDied","Data":"e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149"} Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.882723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qqmr8" event={"ID":"feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a","Type":"ContainerDied","Data":"c526e65d8a23551a767ef14b7e86b436a92b7d60364e925915286416cf17c301"} Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.882741 4856 scope.go:117] "RemoveContainer" containerID="e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149" Nov 22 09:09:33 crc 
kubenswrapper[4856]: I1122 09:09:33.882771 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qqmr8" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.909724 4856 scope.go:117] "RemoveContainer" containerID="13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.934144 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qqmr8"] Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.943258 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qqmr8"] Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.950572 4856 scope.go:117] "RemoveContainer" containerID="2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.981965 4856 scope.go:117] "RemoveContainer" containerID="e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149" Nov 22 09:09:33 crc kubenswrapper[4856]: E1122 09:09:33.982706 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149\": container with ID starting with e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149 not found: ID does not exist" containerID="e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.982748 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149"} err="failed to get container status \"e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149\": rpc error: code = NotFound desc = could not find container \"e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149\": container with ID starting with e2ddd653075820337f0e970447edc26467a25172dbd8cf2a5e6876dccb082149 not found: ID does not exist" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.982774 4856 scope.go:117] "RemoveContainer" containerID="13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c" Nov 22 09:09:33 crc kubenswrapper[4856]: E1122 09:09:33.983047 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c\": container with ID starting with 13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c not found: ID does not exist" containerID="13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.983077 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c"} err="failed to get container status \"13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c\": rpc error: code = NotFound desc = could not find container \"13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c\": container with ID starting with 13fa82845345359d7613e67fc1c959c27b21b984b1265a3c29c328d8a83d669c not found: ID does not exist" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.983098 4856 scope.go:117] "RemoveContainer" containerID="2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3" Nov 22 09:09:33 
crc kubenswrapper[4856]: E1122 09:09:33.983313 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3\": container with ID starting with 2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3 not found: ID does not exist" containerID="2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3" Nov 22 09:09:33 crc kubenswrapper[4856]: I1122 09:09:33.983333 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3"} err="failed to get container status \"2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3\": rpc error: code = NotFound desc = could not find container \"2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3\": container with ID starting with 2fefad74ac21880129d8bafeaea947100ec3458b95917353432eea6e1966c8a3 not found: ID does not exist" Nov 22 09:09:34 crc kubenswrapper[4856]: I1122 09:09:34.724630 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" path="/var/lib/kubelet/pods/feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a/volumes" Nov 22 09:09:38 crc kubenswrapper[4856]: I1122 09:09:38.940961 4856 generic.go:334] "Generic (PLEG): container finished" podID="2279f2ab-cdc9-4bbb-9d75-4f259de8f544" containerID="7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd" exitCode=0 Nov 22 09:09:38 crc kubenswrapper[4856]: I1122 09:09:38.941044 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" event={"ID":"2279f2ab-cdc9-4bbb-9d75-4f259de8f544","Type":"ContainerDied","Data":"7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd"} Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.353825 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.437800 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-inventory\") pod \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.437900 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj7sh\" (UniqueName: \"kubernetes.io/projected/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-kube-api-access-gj7sh\") pod \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.438026 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-ssh-key\") pod \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\" (UID: \"2279f2ab-cdc9-4bbb-9d75-4f259de8f544\") " Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.443407 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-kube-api-access-gj7sh" (OuterVolumeSpecName: "kube-api-access-gj7sh") pod "2279f2ab-cdc9-4bbb-9d75-4f259de8f544" (UID: "2279f2ab-cdc9-4bbb-9d75-4f259de8f544"). InnerVolumeSpecName "kube-api-access-gj7sh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.465049 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-inventory" (OuterVolumeSpecName: "inventory") pod "2279f2ab-cdc9-4bbb-9d75-4f259de8f544" (UID: "2279f2ab-cdc9-4bbb-9d75-4f259de8f544"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.471711 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2279f2ab-cdc9-4bbb-9d75-4f259de8f544" (UID: "2279f2ab-cdc9-4bbb-9d75-4f259de8f544"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.541097 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.541138 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.541153 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj7sh\" (UniqueName: \"kubernetes.io/projected/2279f2ab-cdc9-4bbb-9d75-4f259de8f544-kube-api-access-gj7sh\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.960653 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" event={"ID":"2279f2ab-cdc9-4bbb-9d75-4f259de8f544","Type":"ContainerDied","Data":"3310ec4745939fe9cfcb5b61a8521426646e8b6ce56cd16a29311e847e6ed0c3"} Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.960969 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3310ec4745939fe9cfcb5b61a8521426646e8b6ce56cd16a29311e847e6ed0c3" Nov 22 09:09:40 crc kubenswrapper[4856]: I1122 09:09:40.961530 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-hxn22" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.049353 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c5b9"] Nov 22 09:09:41 crc kubenswrapper[4856]: E1122 09:09:41.049877 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="extract-content" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.049903 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="extract-content" Nov 22 09:09:41 crc kubenswrapper[4856]: E1122 09:09:41.049924 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="extract-utilities" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.049933 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="extract-utilities" Nov 22 09:09:41 crc kubenswrapper[4856]: E1122 09:09:41.049946 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2279f2ab-cdc9-4bbb-9d75-4f259de8f544" containerName="configure-network-openstack-openstack-cell1" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.049957 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="2279f2ab-cdc9-4bbb-9d75-4f259de8f544" containerName="configure-network-openstack-openstack-cell1" Nov 22 09:09:41 crc kubenswrapper[4856]: E1122 09:09:41.049994 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="registry-server" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.050004 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="registry-server" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.050209 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="feac4e1d-4f8a-42f0-bfa5-43cd8ff4e97a" containerName="registry-server" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.050252 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="2279f2ab-cdc9-4bbb-9d75-4f259de8f544" containerName="configure-network-openstack-openstack-cell1" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.051091 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.057297 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.057921 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.058703 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.058802 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.074547 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c5b9"] Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.152153 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-inventory\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.152235 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzt4h\" (UniqueName: \"kubernetes.io/projected/5e2c9028-9241-4e80-b568-edbac775f871-kube-api-access-lzt4h\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.152257 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-ssh-key\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.253663 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-inventory\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.253732 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzt4h\" (UniqueName: \"kubernetes.io/projected/5e2c9028-9241-4e80-b568-edbac775f871-kube-api-access-lzt4h\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.253755 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-ssh-key\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " 
pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.258361 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-inventory\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.258718 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-ssh-key\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.274412 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzt4h\" (UniqueName: \"kubernetes.io/projected/5e2c9028-9241-4e80-b568-edbac775f871-kube-api-access-lzt4h\") pod \"validate-network-openstack-openstack-cell1-5c5b9\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.378212 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.918385 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5c5b9"] Nov 22 09:09:41 crc kubenswrapper[4856]: I1122 09:09:41.970118 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" event={"ID":"5e2c9028-9241-4e80-b568-edbac775f871","Type":"ContainerStarted","Data":"2499cc129ac570b65ef53d5e69bc8828c93f3cada4c48dfac479532fa59289e7"} Nov 22 09:09:42 crc kubenswrapper[4856]: E1122 09:09:42.050904 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2279f2ab_cdc9_4bbb_9d75_4f259de8f544.slice/crio-7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:09:42 crc kubenswrapper[4856]: I1122 09:09:42.979238 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" event={"ID":"5e2c9028-9241-4e80-b568-edbac775f871","Type":"ContainerStarted","Data":"39ab20a420d719ddf3a0d4d825bd024369ab6d772e008e6c73876572bd123b62"} Nov 22 09:09:43 crc kubenswrapper[4856]: I1122 09:09:43.000975 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" podStartSLOduration=1.56352496 podStartE2EDuration="2.000956635s" podCreationTimestamp="2025-11-22 09:09:41 +0000 UTC" firstStartedPulling="2025-11-22 09:09:41.921302134 +0000 UTC m=+7624.334695392" lastFinishedPulling="2025-11-22 09:09:42.358733809 +0000 UTC m=+7624.772127067" observedRunningTime="2025-11-22 09:09:42.994622785 +0000 UTC m=+7625.408016053" watchObservedRunningTime="2025-11-22 09:09:43.000956635 +0000 UTC m=+7625.414349893" Nov 22 09:09:45 crc kubenswrapper[4856]: I1122 09:09:45.710566 4856 scope.go:117] "RemoveContainer" 
containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:09:45 crc kubenswrapper[4856]: E1122 09:09:45.711091 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:09:48 crc kubenswrapper[4856]: I1122 09:09:48.028428 4856 generic.go:334] "Generic (PLEG): container finished" podID="5e2c9028-9241-4e80-b568-edbac775f871" containerID="39ab20a420d719ddf3a0d4d825bd024369ab6d772e008e6c73876572bd123b62" exitCode=0 Nov 22 09:09:48 crc kubenswrapper[4856]: I1122 09:09:48.028555 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" event={"ID":"5e2c9028-9241-4e80-b568-edbac775f871","Type":"ContainerDied","Data":"39ab20a420d719ddf3a0d4d825bd024369ab6d772e008e6c73876572bd123b62"} Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.430347 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.527107 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzt4h\" (UniqueName: \"kubernetes.io/projected/5e2c9028-9241-4e80-b568-edbac775f871-kube-api-access-lzt4h\") pod \"5e2c9028-9241-4e80-b568-edbac775f871\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.527324 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-inventory\") pod \"5e2c9028-9241-4e80-b568-edbac775f871\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.527412 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-ssh-key\") pod \"5e2c9028-9241-4e80-b568-edbac775f871\" (UID: \"5e2c9028-9241-4e80-b568-edbac775f871\") " Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.533455 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2c9028-9241-4e80-b568-edbac775f871-kube-api-access-lzt4h" (OuterVolumeSpecName: "kube-api-access-lzt4h") pod "5e2c9028-9241-4e80-b568-edbac775f871" (UID: "5e2c9028-9241-4e80-b568-edbac775f871"). InnerVolumeSpecName "kube-api-access-lzt4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.556841 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5e2c9028-9241-4e80-b568-edbac775f871" (UID: "5e2c9028-9241-4e80-b568-edbac775f871"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.557947 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-inventory" (OuterVolumeSpecName: "inventory") pod "5e2c9028-9241-4e80-b568-edbac775f871" (UID: "5e2c9028-9241-4e80-b568-edbac775f871"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.629806 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.629834 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzt4h\" (UniqueName: \"kubernetes.io/projected/5e2c9028-9241-4e80-b568-edbac775f871-kube-api-access-lzt4h\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:49 crc kubenswrapper[4856]: I1122 09:09:49.629845 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e2c9028-9241-4e80-b568-edbac775f871-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.047105 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" event={"ID":"5e2c9028-9241-4e80-b568-edbac775f871","Type":"ContainerDied","Data":"2499cc129ac570b65ef53d5e69bc8828c93f3cada4c48dfac479532fa59289e7"} Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.047161 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2499cc129ac570b65ef53d5e69bc8828c93f3cada4c48dfac479532fa59289e7" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.047231 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5c5b9" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.108942 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-zflxc"] Nov 22 09:09:50 crc kubenswrapper[4856]: E1122 09:09:50.109446 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2c9028-9241-4e80-b568-edbac775f871" containerName="validate-network-openstack-openstack-cell1" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.109468 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2c9028-9241-4e80-b568-edbac775f871" containerName="validate-network-openstack-openstack-cell1" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.109729 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2c9028-9241-4e80-b568-edbac775f871" containerName="validate-network-openstack-openstack-cell1" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.110417 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.117166 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-zflxc"] Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.117545 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.118022 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.118118 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.119079 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.243948 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s877f\" (UniqueName: \"kubernetes.io/projected/0b3118f9-cb97-4f71-95d4-65c235c904dc-kube-api-access-s877f\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.244106 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-inventory\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.244153 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-ssh-key\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.345767 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s877f\" (UniqueName: \"kubernetes.io/projected/0b3118f9-cb97-4f71-95d4-65c235c904dc-kube-api-access-s877f\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.345849 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-inventory\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.345897 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-ssh-key\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.349583 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-ssh-key\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.350292 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-inventory\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.362346 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s877f\" (UniqueName: \"kubernetes.io/projected/0b3118f9-cb97-4f71-95d4-65c235c904dc-kube-api-access-s877f\") pod \"install-os-openstack-openstack-cell1-zflxc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.441711 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:09:50 crc kubenswrapper[4856]: I1122 09:09:50.927390 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-zflxc"] Nov 22 09:09:51 crc kubenswrapper[4856]: I1122 09:09:51.057720 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-zflxc" event={"ID":"0b3118f9-cb97-4f71-95d4-65c235c904dc","Type":"ContainerStarted","Data":"2c5adee31ed7142115a15d71e275bb1d0d4c272fac91d33a9a88c961b887cb6d"} Nov 22 09:09:52 crc kubenswrapper[4856]: I1122 09:09:52.069565 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-zflxc" event={"ID":"0b3118f9-cb97-4f71-95d4-65c235c904dc","Type":"ContainerStarted","Data":"5cfd4233c863d8cc3631c931d6fef14a8581527cbd6e6fd3d3cc83e8b8b5eac7"} Nov 22 09:09:52 crc kubenswrapper[4856]: I1122 09:09:52.090960 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-zflxc" podStartSLOduration=1.638417364 podStartE2EDuration="2.090882275s" podCreationTimestamp="2025-11-22 09:09:50 +0000 UTC" firstStartedPulling="2025-11-22 09:09:50.934263879 +0000 UTC m=+7633.347657137" lastFinishedPulling="2025-11-22 09:09:51.38672877 +0000 UTC m=+7633.800122048" observedRunningTime="2025-11-22 09:09:52.087206546 +0000 UTC m=+7634.500599814" watchObservedRunningTime="2025-11-22 09:09:52.090882275 +0000 UTC m=+7634.504275543" Nov 22 09:09:52 crc kubenswrapper[4856]: E1122 09:09:52.315808 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2279f2ab_cdc9_4bbb_9d75_4f259de8f544.slice/crio-7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:09:58 crc kubenswrapper[4856]: I1122 09:09:58.716124 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:09:58 crc kubenswrapper[4856]: E1122 09:09:58.716765 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:10:02 crc kubenswrapper[4856]: E1122 09:10:02.635851 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2279f2ab_cdc9_4bbb_9d75_4f259de8f544.slice/crio-7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:10:12 crc kubenswrapper[4856]: I1122 09:10:12.710401 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:10:12 crc kubenswrapper[4856]: E1122 09:10:12.711483 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:10:12 crc kubenswrapper[4856]: E1122 09:10:12.896626 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2279f2ab_cdc9_4bbb_9d75_4f259de8f544.slice/crio-7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:10:23 crc kubenswrapper[4856]: E1122 09:10:23.216888 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2279f2ab_cdc9_4bbb_9d75_4f259de8f544.slice/crio-7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:10:27 crc kubenswrapper[4856]: I1122 09:10:27.709872 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:10:27 crc kubenswrapper[4856]: E1122 09:10:27.710693 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:10:33 crc kubenswrapper[4856]: E1122 09:10:33.476785 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2279f2ab_cdc9_4bbb_9d75_4f259de8f544.slice/crio-7073f7133bb65e95f70709323dbf45f75927acc19116165d96a0d7c876d573bd.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:10:35 crc kubenswrapper[4856]: I1122 09:10:35.478185 4856 generic.go:334] "Generic (PLEG): container finished" podID="0b3118f9-cb97-4f71-95d4-65c235c904dc" 
containerID="5cfd4233c863d8cc3631c931d6fef14a8581527cbd6e6fd3d3cc83e8b8b5eac7" exitCode=0 Nov 22 09:10:35 crc kubenswrapper[4856]: I1122 09:10:35.478593 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-zflxc" event={"ID":"0b3118f9-cb97-4f71-95d4-65c235c904dc","Type":"ContainerDied","Data":"5cfd4233c863d8cc3631c931d6fef14a8581527cbd6e6fd3d3cc83e8b8b5eac7"} Nov 22 09:10:36 crc kubenswrapper[4856]: I1122 09:10:36.919762 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:10:36 crc kubenswrapper[4856]: I1122 09:10:36.939404 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-ssh-key\") pod \"0b3118f9-cb97-4f71-95d4-65c235c904dc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " Nov 22 09:10:36 crc kubenswrapper[4856]: I1122 09:10:36.939501 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s877f\" (UniqueName: \"kubernetes.io/projected/0b3118f9-cb97-4f71-95d4-65c235c904dc-kube-api-access-s877f\") pod \"0b3118f9-cb97-4f71-95d4-65c235c904dc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " Nov 22 09:10:36 crc kubenswrapper[4856]: I1122 09:10:36.939762 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-inventory\") pod \"0b3118f9-cb97-4f71-95d4-65c235c904dc\" (UID: \"0b3118f9-cb97-4f71-95d4-65c235c904dc\") " Nov 22 09:10:36 crc kubenswrapper[4856]: I1122 09:10:36.947759 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3118f9-cb97-4f71-95d4-65c235c904dc-kube-api-access-s877f" (OuterVolumeSpecName: "kube-api-access-s877f") pod "0b3118f9-cb97-4f71-95d4-65c235c904dc" (UID: "0b3118f9-cb97-4f71-95d4-65c235c904dc"). InnerVolumeSpecName "kube-api-access-s877f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:10:36 crc kubenswrapper[4856]: I1122 09:10:36.972960 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b3118f9-cb97-4f71-95d4-65c235c904dc" (UID: "0b3118f9-cb97-4f71-95d4-65c235c904dc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:10:36 crc kubenswrapper[4856]: I1122 09:10:36.999599 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-inventory" (OuterVolumeSpecName: "inventory") pod "0b3118f9-cb97-4f71-95d4-65c235c904dc" (UID: "0b3118f9-cb97-4f71-95d4-65c235c904dc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.042293 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.042330 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b3118f9-cb97-4f71-95d4-65c235c904dc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.042339 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s877f\" (UniqueName: \"kubernetes.io/projected/0b3118f9-cb97-4f71-95d4-65c235c904dc-kube-api-access-s877f\") on node \"crc\" DevicePath \"\"" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.501188 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-zflxc" event={"ID":"0b3118f9-cb97-4f71-95d4-65c235c904dc","Type":"ContainerDied","Data":"2c5adee31ed7142115a15d71e275bb1d0d4c272fac91d33a9a88c961b887cb6d"} Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.501231 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c5adee31ed7142115a15d71e275bb1d0d4c272fac91d33a9a88c961b887cb6d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.501257 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-zflxc" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.597867 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-qhn5d"] Nov 22 09:10:37 crc kubenswrapper[4856]: E1122 09:10:37.598366 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3118f9-cb97-4f71-95d4-65c235c904dc" containerName="install-os-openstack-openstack-cell1" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.598390 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3118f9-cb97-4f71-95d4-65c235c904dc" containerName="install-os-openstack-openstack-cell1" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.598713 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3118f9-cb97-4f71-95d4-65c235c904dc" containerName="install-os-openstack-openstack-cell1" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.599536 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.601553 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.601669 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.601732 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.602877 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.620103 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-qhn5d"] Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.655319 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-ssh-key\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.655634 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-inventory\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.655781 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzr56\" (UniqueName: \"kubernetes.io/projected/1ae2a389-4844-467a-a2a7-2296bdb9275b-kube-api-access-nzr56\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.757957 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzr56\" (UniqueName: \"kubernetes.io/projected/1ae2a389-4844-467a-a2a7-2296bdb9275b-kube-api-access-nzr56\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.758656 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-ssh-key\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.758787 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-inventory\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: 
I1122 09:10:37.762392 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-inventory\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.762435 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-ssh-key\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.774294 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzr56\" (UniqueName: \"kubernetes.io/projected/1ae2a389-4844-467a-a2a7-2296bdb9275b-kube-api-access-nzr56\") pod \"configure-os-openstack-openstack-cell1-qhn5d\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:37 crc kubenswrapper[4856]: I1122 09:10:37.928808 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:10:38 crc kubenswrapper[4856]: I1122 09:10:38.498896 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-qhn5d"] Nov 22 09:10:38 crc kubenswrapper[4856]: I1122 09:10:38.504593 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:10:38 crc kubenswrapper[4856]: I1122 09:10:38.512996 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" event={"ID":"1ae2a389-4844-467a-a2a7-2296bdb9275b","Type":"ContainerStarted","Data":"c7e1197fe17a2ad94c49ddc679775e7e05f85f935d32dfc4ea41fb77d99e9c6c"} Nov 22 09:10:38 crc kubenswrapper[4856]: I1122 09:10:38.710140 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:10:38 crc kubenswrapper[4856]: E1122 09:10:38.710416 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:10:38 crc kubenswrapper[4856]: I1122 09:10:38.933919 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:10:39 crc kubenswrapper[4856]: I1122 09:10:39.525571 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" event={"ID":"1ae2a389-4844-467a-a2a7-2296bdb9275b","Type":"ContainerStarted","Data":"c6948b1719a9a176faa66453446438d4aa2db236726cac5f6ca5d1f21d407187"} Nov 22 09:10:39 crc kubenswrapper[4856]: I1122 09:10:39.551849 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" podStartSLOduration=2.124790217 podStartE2EDuration="2.551832141s" podCreationTimestamp="2025-11-22 09:10:37 +0000 UTC" 
firstStartedPulling="2025-11-22 09:10:38.504327487 +0000 UTC m=+7680.917720745" lastFinishedPulling="2025-11-22 09:10:38.931369411 +0000 UTC m=+7681.344762669" observedRunningTime="2025-11-22 09:10:39.542826588 +0000 UTC m=+7681.956219866" watchObservedRunningTime="2025-11-22 09:10:39.551832141 +0000 UTC m=+7681.965225399" Nov 22 09:10:53 crc kubenswrapper[4856]: I1122 09:10:53.711013 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:10:53 crc kubenswrapper[4856]: E1122 09:10:53.712132 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:11:04 crc kubenswrapper[4856]: I1122 09:11:04.710567 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:11:04 crc kubenswrapper[4856]: E1122 09:11:04.712588 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:11:18 crc kubenswrapper[4856]: I1122 09:11:18.716849 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:11:18 crc kubenswrapper[4856]: E1122 09:11:18.717675 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:11:23 crc kubenswrapper[4856]: I1122 09:11:23.212238 4856 generic.go:334] "Generic (PLEG): container finished" podID="1ae2a389-4844-467a-a2a7-2296bdb9275b" containerID="c6948b1719a9a176faa66453446438d4aa2db236726cac5f6ca5d1f21d407187" exitCode=0 Nov 22 09:11:23 crc kubenswrapper[4856]: I1122 09:11:23.212372 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" event={"ID":"1ae2a389-4844-467a-a2a7-2296bdb9275b","Type":"ContainerDied","Data":"c6948b1719a9a176faa66453446438d4aa2db236726cac5f6ca5d1f21d407187"} Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.651715 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.829904 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzr56\" (UniqueName: \"kubernetes.io/projected/1ae2a389-4844-467a-a2a7-2296bdb9275b-kube-api-access-nzr56\") pod \"1ae2a389-4844-467a-a2a7-2296bdb9275b\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.830179 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-inventory\") pod \"1ae2a389-4844-467a-a2a7-2296bdb9275b\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.831003 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-ssh-key\") pod \"1ae2a389-4844-467a-a2a7-2296bdb9275b\" (UID: \"1ae2a389-4844-467a-a2a7-2296bdb9275b\") " Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.835520 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae2a389-4844-467a-a2a7-2296bdb9275b-kube-api-access-nzr56" (OuterVolumeSpecName: "kube-api-access-nzr56") pod "1ae2a389-4844-467a-a2a7-2296bdb9275b" (UID: "1ae2a389-4844-467a-a2a7-2296bdb9275b"). InnerVolumeSpecName "kube-api-access-nzr56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.861643 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-inventory" (OuterVolumeSpecName: "inventory") pod "1ae2a389-4844-467a-a2a7-2296bdb9275b" (UID: "1ae2a389-4844-467a-a2a7-2296bdb9275b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.861976 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1ae2a389-4844-467a-a2a7-2296bdb9275b" (UID: "1ae2a389-4844-467a-a2a7-2296bdb9275b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.933727 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.933763 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzr56\" (UniqueName: \"kubernetes.io/projected/1ae2a389-4844-467a-a2a7-2296bdb9275b-kube-api-access-nzr56\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:24 crc kubenswrapper[4856]: I1122 09:11:24.933776 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ae2a389-4844-467a-a2a7-2296bdb9275b-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.236342 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" event={"ID":"1ae2a389-4844-467a-a2a7-2296bdb9275b","Type":"ContainerDied","Data":"c7e1197fe17a2ad94c49ddc679775e7e05f85f935d32dfc4ea41fb77d99e9c6c"} Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.236396 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e1197fe17a2ad94c49ddc679775e7e05f85f935d32dfc4ea41fb77d99e9c6c" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.236462 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-qhn5d" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.330783 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-openstack-kvrgl"] Nov 22 09:11:25 crc kubenswrapper[4856]: E1122 09:11:25.331339 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae2a389-4844-467a-a2a7-2296bdb9275b" containerName="configure-os-openstack-openstack-cell1" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.331364 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae2a389-4844-467a-a2a7-2296bdb9275b" containerName="configure-os-openstack-openstack-cell1" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.331655 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae2a389-4844-467a-a2a7-2296bdb9275b" containerName="configure-os-openstack-openstack-cell1" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.332797 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.335790 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.336854 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.337016 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.337054 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.340561 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.340832 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-inventory-0\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.341067 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqjw7\" (UniqueName: \"kubernetes.io/projected/dc5da2fb-1405-4be9-adca-169ef62d4f19-kube-api-access-dqjw7\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.344313 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-kvrgl"] Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.442306 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-inventory-0\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.442473 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqjw7\" (UniqueName: \"kubernetes.io/projected/dc5da2fb-1405-4be9-adca-169ef62d4f19-kube-api-access-dqjw7\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.442585 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.449740 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-ssh-key-openstack-cell1\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.449748 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-inventory-0\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.461798 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqjw7\" (UniqueName: \"kubernetes.io/projected/dc5da2fb-1405-4be9-adca-169ef62d4f19-kube-api-access-dqjw7\") pod \"ssh-known-hosts-openstack-kvrgl\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:25 crc kubenswrapper[4856]: I1122 09:11:25.655621 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:26 crc kubenswrapper[4856]: I1122 09:11:26.198862 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-openstack-kvrgl"] Nov 22 09:11:26 crc kubenswrapper[4856]: I1122 09:11:26.245447 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-kvrgl" event={"ID":"dc5da2fb-1405-4be9-adca-169ef62d4f19","Type":"ContainerStarted","Data":"042de01cd641c5a971e5e92697fe0ad75eb79b70e4635e6cac1998e574624964"} Nov 22 09:11:27 crc kubenswrapper[4856]: I1122 09:11:27.256943 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-kvrgl" event={"ID":"dc5da2fb-1405-4be9-adca-169ef62d4f19","Type":"ContainerStarted","Data":"0895858a53afacea52af0b1c458148ccb5558cddcbbfb36cc76b4346ccf83ace"} Nov 22 09:11:30 crc kubenswrapper[4856]: I1122 09:11:30.710438 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:11:31 crc kubenswrapper[4856]: I1122 09:11:31.313771 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"fc11a243af1a19cf535ce76d7bb4962a44374e57856bbefc7f1a740aa36c0387"} Nov 22 09:11:31 crc kubenswrapper[4856]: I1122 09:11:31.347418 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-openstack-kvrgl" podStartSLOduration=5.867806659 podStartE2EDuration="6.347398311s" podCreationTimestamp="2025-11-22 09:11:25 +0000 UTC" firstStartedPulling="2025-11-22 09:11:26.202833848 +0000 UTC m=+7728.616227106" lastFinishedPulling="2025-11-22 09:11:26.68242549 +0000 UTC m=+7729.095818758" observedRunningTime="2025-11-22 09:11:27.278472631 +0000 UTC m=+7729.691865899" watchObservedRunningTime="2025-11-22 09:11:31.347398311 +0000 UTC m=+7733.760791569" Nov 22 09:11:35 crc kubenswrapper[4856]: E1122 09:11:35.079402 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc5da2fb_1405_4be9_adca_169ef62d4f19.slice/crio-0895858a53afacea52af0b1c458148ccb5558cddcbbfb36cc76b4346ccf83ace.scope\": 
RecentStats: unable to find data in memory cache]" Nov 22 09:11:35 crc kubenswrapper[4856]: I1122 09:11:35.370161 4856 generic.go:334] "Generic (PLEG): container finished" podID="dc5da2fb-1405-4be9-adca-169ef62d4f19" containerID="0895858a53afacea52af0b1c458148ccb5558cddcbbfb36cc76b4346ccf83ace" exitCode=0 Nov 22 09:11:35 crc kubenswrapper[4856]: I1122 09:11:35.370217 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-kvrgl" event={"ID":"dc5da2fb-1405-4be9-adca-169ef62d4f19","Type":"ContainerDied","Data":"0895858a53afacea52af0b1c458148ccb5558cddcbbfb36cc76b4346ccf83ace"} Nov 22 09:11:36 crc kubenswrapper[4856]: I1122 09:11:36.811977 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:36 crc kubenswrapper[4856]: I1122 09:11:36.970168 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-inventory-0\") pod \"dc5da2fb-1405-4be9-adca-169ef62d4f19\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " Nov 22 09:11:36 crc kubenswrapper[4856]: I1122 09:11:36.971150 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqjw7\" (UniqueName: \"kubernetes.io/projected/dc5da2fb-1405-4be9-adca-169ef62d4f19-kube-api-access-dqjw7\") pod \"dc5da2fb-1405-4be9-adca-169ef62d4f19\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " Nov 22 09:11:36 crc kubenswrapper[4856]: I1122 09:11:36.971692 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-ssh-key-openstack-cell1\") pod \"dc5da2fb-1405-4be9-adca-169ef62d4f19\" (UID: \"dc5da2fb-1405-4be9-adca-169ef62d4f19\") " Nov 22 09:11:36 crc kubenswrapper[4856]: I1122 09:11:36.976114 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5da2fb-1405-4be9-adca-169ef62d4f19-kube-api-access-dqjw7" (OuterVolumeSpecName: "kube-api-access-dqjw7") pod "dc5da2fb-1405-4be9-adca-169ef62d4f19" (UID: "dc5da2fb-1405-4be9-adca-169ef62d4f19"). InnerVolumeSpecName "kube-api-access-dqjw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.003061 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-ssh-key-openstack-cell1" (OuterVolumeSpecName: "ssh-key-openstack-cell1") pod "dc5da2fb-1405-4be9-adca-169ef62d4f19" (UID: "dc5da2fb-1405-4be9-adca-169ef62d4f19"). InnerVolumeSpecName "ssh-key-openstack-cell1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.003501 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "dc5da2fb-1405-4be9-adca-169ef62d4f19" (UID: "dc5da2fb-1405-4be9-adca-169ef62d4f19"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.074390 4856 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.074420 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqjw7\" (UniqueName: \"kubernetes.io/projected/dc5da2fb-1405-4be9-adca-169ef62d4f19-kube-api-access-dqjw7\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.074431 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-cell1\" (UniqueName: \"kubernetes.io/secret/dc5da2fb-1405-4be9-adca-169ef62d4f19-ssh-key-openstack-cell1\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.391742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-openstack-kvrgl" event={"ID":"dc5da2fb-1405-4be9-adca-169ef62d4f19","Type":"ContainerDied","Data":"042de01cd641c5a971e5e92697fe0ad75eb79b70e4635e6cac1998e574624964"} Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.391792 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042de01cd641c5a971e5e92697fe0ad75eb79b70e4635e6cac1998e574624964" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.391893 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-openstack-kvrgl" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.463670 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-openstack-openstack-cell1-9mthx"] Nov 22 09:11:37 crc kubenswrapper[4856]: E1122 09:11:37.464393 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5da2fb-1405-4be9-adca-169ef62d4f19" containerName="ssh-known-hosts-openstack" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.464526 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5da2fb-1405-4be9-adca-169ef62d4f19" containerName="ssh-known-hosts-openstack" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.464799 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5da2fb-1405-4be9-adca-169ef62d4f19" containerName="ssh-known-hosts-openstack" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.465708 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.468450 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.468795 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.469504 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.469836 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.473833 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-9mthx"] Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.481899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qvm7\" (UniqueName: \"kubernetes.io/projected/3d96cb97-55b2-4bec-a4dc-6065d4143687-kube-api-access-4qvm7\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.481973 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-ssh-key\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.482039 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-inventory\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.584798 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qvm7\" (UniqueName: \"kubernetes.io/projected/3d96cb97-55b2-4bec-a4dc-6065d4143687-kube-api-access-4qvm7\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.585034 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-ssh-key\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.585182 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-inventory\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.589108 4856 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-inventory\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.589148 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-ssh-key\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.607211 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qvm7\" (UniqueName: \"kubernetes.io/projected/3d96cb97-55b2-4bec-a4dc-6065d4143687-kube-api-access-4qvm7\") pod \"run-os-openstack-openstack-cell1-9mthx\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:37 crc kubenswrapper[4856]: I1122 09:11:37.796354 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:38 crc kubenswrapper[4856]: I1122 09:11:38.359594 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-openstack-openstack-cell1-9mthx"] Nov 22 09:11:38 crc kubenswrapper[4856]: I1122 09:11:38.406754 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9mthx" event={"ID":"3d96cb97-55b2-4bec-a4dc-6065d4143687","Type":"ContainerStarted","Data":"0058d688ee366b3460782fe2609f520d9d3aa27aa89ebf3aeec8b175164f8a62"} Nov 22 09:11:39 crc kubenswrapper[4856]: I1122 09:11:39.887254 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:11:40 crc kubenswrapper[4856]: I1122 09:11:40.432628 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9mthx" event={"ID":"3d96cb97-55b2-4bec-a4dc-6065d4143687","Type":"ContainerStarted","Data":"d3e169c16a659d063a5ce198d89441ea830c89ebe7706bb8e2ff318082e56f83"} Nov 22 09:11:40 crc kubenswrapper[4856]: I1122 09:11:40.453356 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-openstack-openstack-cell1-9mthx" podStartSLOduration=1.938146028 podStartE2EDuration="3.453322232s" podCreationTimestamp="2025-11-22 09:11:37 +0000 UTC" firstStartedPulling="2025-11-22 09:11:38.369104756 +0000 UTC m=+7740.782498034" lastFinishedPulling="2025-11-22 09:11:39.88428096 +0000 UTC m=+7742.297674238" observedRunningTime="2025-11-22 09:11:40.448763169 +0000 UTC m=+7742.862156447" watchObservedRunningTime="2025-11-22 09:11:40.453322232 +0000 UTC m=+7742.866715490" Nov 22 09:11:48 crc kubenswrapper[4856]: I1122 09:11:48.421494 4856 scope.go:117] "RemoveContainer" containerID="0787c12890076f3453780407f187e1bfe4a7f8f08f41623d8d4a27ade6f379d4" Nov 22 09:11:48 crc kubenswrapper[4856]: I1122 09:11:48.455942 4856 scope.go:117] "RemoveContainer" containerID="b94bf15ae5e84f2182fd0645808c8950ecb0d231501101198b2f34bea0302e73" Nov 22 09:11:48 crc kubenswrapper[4856]: I1122 09:11:48.506909 4856 generic.go:334] "Generic (PLEG): container finished" podID="3d96cb97-55b2-4bec-a4dc-6065d4143687" containerID="d3e169c16a659d063a5ce198d89441ea830c89ebe7706bb8e2ff318082e56f83" exitCode=0 Nov 22 09:11:48 crc kubenswrapper[4856]: I1122 
09:11:48.507022 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9mthx" event={"ID":"3d96cb97-55b2-4bec-a4dc-6065d4143687","Type":"ContainerDied","Data":"d3e169c16a659d063a5ce198d89441ea830c89ebe7706bb8e2ff318082e56f83"} Nov 22 09:11:49 crc kubenswrapper[4856]: I1122 09:11:49.940359 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:49 crc kubenswrapper[4856]: I1122 09:11:49.960561 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-inventory\") pod \"3d96cb97-55b2-4bec-a4dc-6065d4143687\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " Nov 22 09:11:49 crc kubenswrapper[4856]: I1122 09:11:49.960669 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-ssh-key\") pod \"3d96cb97-55b2-4bec-a4dc-6065d4143687\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " Nov 22 09:11:49 crc kubenswrapper[4856]: I1122 09:11:49.960766 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qvm7\" (UniqueName: \"kubernetes.io/projected/3d96cb97-55b2-4bec-a4dc-6065d4143687-kube-api-access-4qvm7\") pod \"3d96cb97-55b2-4bec-a4dc-6065d4143687\" (UID: \"3d96cb97-55b2-4bec-a4dc-6065d4143687\") " Nov 22 09:11:49 crc kubenswrapper[4856]: I1122 09:11:49.967982 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d96cb97-55b2-4bec-a4dc-6065d4143687-kube-api-access-4qvm7" (OuterVolumeSpecName: "kube-api-access-4qvm7") pod "3d96cb97-55b2-4bec-a4dc-6065d4143687" (UID: "3d96cb97-55b2-4bec-a4dc-6065d4143687"). InnerVolumeSpecName "kube-api-access-4qvm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:11:49 crc kubenswrapper[4856]: I1122 09:11:49.996290 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-inventory" (OuterVolumeSpecName: "inventory") pod "3d96cb97-55b2-4bec-a4dc-6065d4143687" (UID: "3d96cb97-55b2-4bec-a4dc-6065d4143687"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:11:49 crc kubenswrapper[4856]: I1122 09:11:49.999787 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3d96cb97-55b2-4bec-a4dc-6065d4143687" (UID: "3d96cb97-55b2-4bec-a4dc-6065d4143687"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.063105 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qvm7\" (UniqueName: \"kubernetes.io/projected/3d96cb97-55b2-4bec-a4dc-6065d4143687-kube-api-access-4qvm7\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.063174 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.063188 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3d96cb97-55b2-4bec-a4dc-6065d4143687-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.536043 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-openstack-openstack-cell1-9mthx" event={"ID":"3d96cb97-55b2-4bec-a4dc-6065d4143687","Type":"ContainerDied","Data":"0058d688ee366b3460782fe2609f520d9d3aa27aa89ebf3aeec8b175164f8a62"} Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.536093 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0058d688ee366b3460782fe2609f520d9d3aa27aa89ebf3aeec8b175164f8a62" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.536149 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-openstack-openstack-cell1-9mthx" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.603027 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-bscsv"] Nov 22 09:11:50 crc kubenswrapper[4856]: E1122 09:11:50.603561 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d96cb97-55b2-4bec-a4dc-6065d4143687" containerName="run-os-openstack-openstack-cell1" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.603580 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d96cb97-55b2-4bec-a4dc-6065d4143687" containerName="run-os-openstack-openstack-cell1" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.603818 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d96cb97-55b2-4bec-a4dc-6065d4143687" containerName="run-os-openstack-openstack-cell1" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.604572 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.606719 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.606783 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.606942 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.607571 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.618705 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-bscsv"] Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.673985 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2ljf\" (UniqueName: \"kubernetes.io/projected/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-kube-api-access-x2ljf\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.674409 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-inventory\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.674451 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.777231 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2ljf\" (UniqueName: \"kubernetes.io/projected/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-kube-api-access-x2ljf\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.777766 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-inventory\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.777887 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.783344 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-ssh-key\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.783845 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-inventory\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.802899 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2ljf\" (UniqueName: \"kubernetes.io/projected/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-kube-api-access-x2ljf\") pod \"reboot-os-openstack-openstack-cell1-bscsv\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:50 crc kubenswrapper[4856]: I1122 09:11:50.929136 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:11:51 crc kubenswrapper[4856]: I1122 09:11:51.437305 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-openstack-openstack-cell1-bscsv"] Nov 22 09:11:51 crc kubenswrapper[4856]: I1122 09:11:51.544995 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" event={"ID":"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed","Type":"ContainerStarted","Data":"1463c4cb507d198cad033a22a6a9e1f1be50bc5017a0be839ae8aa2f4abcbf36"} Nov 22 09:11:53 crc kubenswrapper[4856]: I1122 09:11:53.564798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" event={"ID":"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed","Type":"ContainerStarted","Data":"c47c7d115649b36533b7df1aeb2ec7e501b435ec5ffdc4fe5ba7cbafd6f6f9be"} Nov 22 09:11:53 crc kubenswrapper[4856]: I1122 09:11:53.605298 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" podStartSLOduration=2.655889788 podStartE2EDuration="3.605277947s" podCreationTimestamp="2025-11-22 09:11:50 +0000 UTC" firstStartedPulling="2025-11-22 09:11:51.440587889 +0000 UTC m=+7753.853981147" lastFinishedPulling="2025-11-22 09:11:52.389976048 +0000 UTC m=+7754.803369306" observedRunningTime="2025-11-22 09:11:53.603406165 +0000 UTC m=+7756.016799423" watchObservedRunningTime="2025-11-22 09:11:53.605277947 +0000 UTC m=+7756.018671205" Nov 22 09:12:09 crc kubenswrapper[4856]: I1122 09:12:09.717227 4856 generic.go:334] "Generic (PLEG): container finished" podID="ad7d4bc8-7324-4941-9bdb-c870dbcba3ed" containerID="c47c7d115649b36533b7df1aeb2ec7e501b435ec5ffdc4fe5ba7cbafd6f6f9be" exitCode=0 Nov 22 09:12:09 crc kubenswrapper[4856]: I1122 09:12:09.717291 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" event={"ID":"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed","Type":"ContainerDied","Data":"c47c7d115649b36533b7df1aeb2ec7e501b435ec5ffdc4fe5ba7cbafd6f6f9be"} Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.163239 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.221607 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2ljf\" (UniqueName: \"kubernetes.io/projected/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-kube-api-access-x2ljf\") pod \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.221741 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-ssh-key\") pod \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.228538 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-kube-api-access-x2ljf" (OuterVolumeSpecName: "kube-api-access-x2ljf") pod "ad7d4bc8-7324-4941-9bdb-c870dbcba3ed" (UID: "ad7d4bc8-7324-4941-9bdb-c870dbcba3ed"). InnerVolumeSpecName "kube-api-access-x2ljf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.251838 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ad7d4bc8-7324-4941-9bdb-c870dbcba3ed" (UID: "ad7d4bc8-7324-4941-9bdb-c870dbcba3ed"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.326597 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-inventory\") pod \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\" (UID: \"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed\") " Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.327609 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2ljf\" (UniqueName: \"kubernetes.io/projected/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-kube-api-access-x2ljf\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.327632 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.351743 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-inventory" (OuterVolumeSpecName: "inventory") pod "ad7d4bc8-7324-4941-9bdb-c870dbcba3ed" (UID: "ad7d4bc8-7324-4941-9bdb-c870dbcba3ed"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.428508 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ad7d4bc8-7324-4941-9bdb-c870dbcba3ed-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.737192 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" event={"ID":"ad7d4bc8-7324-4941-9bdb-c870dbcba3ed","Type":"ContainerDied","Data":"1463c4cb507d198cad033a22a6a9e1f1be50bc5017a0be839ae8aa2f4abcbf36"} Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.737228 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1463c4cb507d198cad033a22a6a9e1f1be50bc5017a0be839ae8aa2f4abcbf36" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.737299 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-openstack-openstack-cell1-bscsv" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.872420 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-lqv7q"] Nov 22 09:12:12 crc kubenswrapper[4856]: E1122 09:12:11.873919 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad7d4bc8-7324-4941-9bdb-c870dbcba3ed" containerName="reboot-os-openstack-openstack-cell1" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.873937 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad7d4bc8-7324-4941-9bdb-c870dbcba3ed" containerName="reboot-os-openstack-openstack-cell1" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.874943 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad7d4bc8-7324-4941-9bdb-c870dbcba3ed" containerName="reboot-os-openstack-openstack-cell1" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.877462 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.882353 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-neutron-metadata-default-certs-0" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.882473 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-libvirt-default-certs-0" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.882553 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-ovn-default-certs-0" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.882666 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.883073 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-telemetry-default-certs-0" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.883237 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.883429 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.883583 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:11.891439 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-lqv7q"] Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.039823 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.039909 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.039940 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.039987 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-inventory\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc 
kubenswrapper[4856]: I1122 09:12:12.040025 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040068 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040105 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040138 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040188 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040217 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040258 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040298 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgrt\" (UniqueName: 
\"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-kube-api-access-8bgrt\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040324 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040366 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.040396 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ssh-key\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.141740 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.141784 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.141828 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-inventory\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.141867 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.141907 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.141938 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.141965 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.142008 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.142039 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.142079 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.142106 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bgrt\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-kube-api-access-8bgrt\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.142133 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: 
I1122 09:12:12.142172 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.142199 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ssh-key\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.142258 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.146332 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-sriov-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.146602 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ssh-key\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.146671 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-libvirt-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.146739 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-telemetry-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.147045 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ovn-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.148925 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-nova-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.149108 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-metadata-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.149222 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-inventory\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.150572 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-dhcp-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.150629 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-telemetry-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.151564 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-bootstrap-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.152358 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-libvirt-combined-ca-bundle\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.153881 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-ovn-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.158435 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-neutron-metadata-default-certs-0\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.162955 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bgrt\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-kube-api-access-8bgrt\") pod \"install-certs-openstack-openstack-cell1-lqv7q\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:12 crc kubenswrapper[4856]: I1122 09:12:12.202181 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:13 crc kubenswrapper[4856]: I1122 09:12:13.185515 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-openstack-openstack-cell1-lqv7q"] Nov 22 09:12:13 crc kubenswrapper[4856]: I1122 09:12:13.783908 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" event={"ID":"c16cd078-a2a8-4021-aa0a-60dd1aabbe02","Type":"ContainerStarted","Data":"41f1318029a07385028ea1fad6bfc1f4b21d7be08a3f9abaf6dc7bc90c773350"} Nov 22 09:12:14 crc kubenswrapper[4856]: I1122 09:12:14.794022 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" event={"ID":"c16cd078-a2a8-4021-aa0a-60dd1aabbe02","Type":"ContainerStarted","Data":"fdc4cbd56f35fc458f3050b3268fe6bf675a279bf470f53e49a4e301b3bbe2f5"} Nov 22 09:12:14 crc kubenswrapper[4856]: I1122 09:12:14.825763 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" podStartSLOduration=3.315207537 podStartE2EDuration="3.825740592s" podCreationTimestamp="2025-11-22 09:12:11 +0000 UTC" firstStartedPulling="2025-11-22 09:12:13.199165226 +0000 UTC m=+7775.612558484" lastFinishedPulling="2025-11-22 09:12:13.709698281 +0000 UTC m=+7776.123091539" observedRunningTime="2025-11-22 09:12:14.813539344 +0000 UTC m=+7777.226932602" watchObservedRunningTime="2025-11-22 09:12:14.825740592 +0000 UTC m=+7777.239133850" Nov 22 09:12:48 crc kubenswrapper[4856]: I1122 09:12:48.550758 4856 scope.go:117] "RemoveContainer" containerID="c45121e7c64145462036e23b400bbc9383b6ba5b1848a47c8bad88525aa3fd07" Nov 22 09:12:49 crc kubenswrapper[4856]: I1122 09:12:49.155421 4856 generic.go:334] "Generic (PLEG): container finished" podID="c16cd078-a2a8-4021-aa0a-60dd1aabbe02" containerID="fdc4cbd56f35fc458f3050b3268fe6bf675a279bf470f53e49a4e301b3bbe2f5" exitCode=0 Nov 22 09:12:49 crc kubenswrapper[4856]: I1122 09:12:49.155465 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" event={"ID":"c16cd078-a2a8-4021-aa0a-60dd1aabbe02","Type":"ContainerDied","Data":"fdc4cbd56f35fc458f3050b3268fe6bf675a279bf470f53e49a4e301b3bbe2f5"} Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.572534 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671446 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-ovn-default-certs-0\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671532 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-sriov-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671589 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-telemetry-default-certs-0\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671616 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ovn-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671636 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-neutron-metadata-default-certs-0\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671668 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-nova-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671720 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ssh-key\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671797 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-telemetry-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671821 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-inventory\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 
09:12:50.671851 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-bootstrap-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671970 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-libvirt-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.671998 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-metadata-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.672027 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bgrt\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-kube-api-access-8bgrt\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.672093 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-libvirt-default-certs-0\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.672111 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-dhcp-combined-ca-bundle\") pod \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\" (UID: \"c16cd078-a2a8-4021-aa0a-60dd1aabbe02\") " Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.679041 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.680642 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.680423 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.681258 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.682025 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-telemetry-default-certs-0") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "openstack-cell1-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.682368 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-kube-api-access-8bgrt" (OuterVolumeSpecName: "kube-api-access-8bgrt") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "kube-api-access-8bgrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.683270 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.684267 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.684808 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-neutron-metadata-default-certs-0") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "openstack-cell1-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.685588 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.685900 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-ovn-default-certs-0") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "openstack-cell1-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.687204 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.690220 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-cell1-libvirt-default-certs-0") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "openstack-cell1-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.709456 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-inventory" (OuterVolumeSpecName: "inventory") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.715648 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c16cd078-a2a8-4021-aa0a-60dd1aabbe02" (UID: "c16cd078-a2a8-4021-aa0a-60dd1aabbe02"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777047 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777106 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777122 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777135 4856 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777151 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777165 4856 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777181 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777193 4856 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777204 4856 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777216 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777229 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bgrt\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-kube-api-access-8bgrt\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777240 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-libvirt-default-certs-0\") on node \"crc\" 
DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777255 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777267 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-openstack-cell1-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:50 crc kubenswrapper[4856]: I1122 09:12:50.777281 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c16cd078-a2a8-4021-aa0a-60dd1aabbe02-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.175187 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" event={"ID":"c16cd078-a2a8-4021-aa0a-60dd1aabbe02","Type":"ContainerDied","Data":"41f1318029a07385028ea1fad6bfc1f4b21d7be08a3f9abaf6dc7bc90c773350"} Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.175231 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f1318029a07385028ea1fad6bfc1f4b21d7be08a3f9abaf6dc7bc90c773350" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.175289 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-openstack-openstack-cell1-lqv7q" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.317238 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-openstack-openstack-cell1-znkhd"] Nov 22 09:12:51 crc kubenswrapper[4856]: E1122 09:12:51.318190 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c16cd078-a2a8-4021-aa0a-60dd1aabbe02" containerName="install-certs-openstack-openstack-cell1" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.318212 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c16cd078-a2a8-4021-aa0a-60dd1aabbe02" containerName="install-certs-openstack-openstack-cell1" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.318689 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c16cd078-a2a8-4021-aa0a-60dd1aabbe02" containerName="install-certs-openstack-openstack-cell1" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.320364 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.322737 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.323096 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.323506 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.323785 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.325253 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.335143 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-znkhd"] Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.502124 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f3674061-72ad-4651-b5f4-29795684fe8e-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.502505 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ssh-key\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.502678 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.502758 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-inventory\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.502849 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcmcl\" (UniqueName: \"kubernetes.io/projected/f3674061-72ad-4651-b5f4-29795684fe8e-kube-api-access-tcmcl\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.604548 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-inventory\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: 
\"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.604617 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcmcl\" (UniqueName: \"kubernetes.io/projected/f3674061-72ad-4651-b5f4-29795684fe8e-kube-api-access-tcmcl\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.604888 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f3674061-72ad-4651-b5f4-29795684fe8e-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.604941 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ssh-key\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.604965 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.606984 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f3674061-72ad-4651-b5f4-29795684fe8e-ovncontroller-config-0\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.609535 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ovn-combined-ca-bundle\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.609882 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-inventory\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.611550 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ssh-key\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.631411 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcmcl\" (UniqueName: 
\"kubernetes.io/projected/f3674061-72ad-4651-b5f4-29795684fe8e-kube-api-access-tcmcl\") pod \"ovn-openstack-openstack-cell1-znkhd\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:51 crc kubenswrapper[4856]: I1122 09:12:51.645024 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:12:52 crc kubenswrapper[4856]: I1122 09:12:52.198626 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-openstack-openstack-cell1-znkhd"] Nov 22 09:12:53 crc kubenswrapper[4856]: I1122 09:12:53.196919 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-znkhd" event={"ID":"f3674061-72ad-4651-b5f4-29795684fe8e","Type":"ContainerStarted","Data":"72c03a11297c61406c5ecdc4e5240db19998353997cf522bb8de877ed2134c80"} Nov 22 09:12:54 crc kubenswrapper[4856]: I1122 09:12:54.205847 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-znkhd" event={"ID":"f3674061-72ad-4651-b5f4-29795684fe8e","Type":"ContainerStarted","Data":"8675731ff11f0834bbe545a03a00d02b0720e7d451ee36402578eb34783b5356"} Nov 22 09:12:54 crc kubenswrapper[4856]: I1122 09:12:54.237279 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-openstack-openstack-cell1-znkhd" podStartSLOduration=2.711494476 podStartE2EDuration="3.237254733s" podCreationTimestamp="2025-11-22 09:12:51 +0000 UTC" firstStartedPulling="2025-11-22 09:12:52.216218919 +0000 UTC m=+7814.629612187" lastFinishedPulling="2025-11-22 09:12:52.741979186 +0000 UTC m=+7815.155372444" observedRunningTime="2025-11-22 09:12:54.22603754 +0000 UTC m=+7816.639430808" watchObservedRunningTime="2025-11-22 09:12:54.237254733 +0000 UTC m=+7816.650647991" Nov 22 09:13:55 crc kubenswrapper[4856]: I1122 09:13:55.788040 4856 generic.go:334] "Generic (PLEG): container finished" podID="f3674061-72ad-4651-b5f4-29795684fe8e" containerID="8675731ff11f0834bbe545a03a00d02b0720e7d451ee36402578eb34783b5356" exitCode=0 Nov 22 09:13:55 crc kubenswrapper[4856]: I1122 09:13:55.788307 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-znkhd" event={"ID":"f3674061-72ad-4651-b5f4-29795684fe8e","Type":"ContainerDied","Data":"8675731ff11f0834bbe545a03a00d02b0720e7d451ee36402578eb34783b5356"} Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.220057 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.384095 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcmcl\" (UniqueName: \"kubernetes.io/projected/f3674061-72ad-4651-b5f4-29795684fe8e-kube-api-access-tcmcl\") pod \"f3674061-72ad-4651-b5f4-29795684fe8e\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.384711 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ssh-key\") pod \"f3674061-72ad-4651-b5f4-29795684fe8e\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.384798 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-inventory\") pod \"f3674061-72ad-4651-b5f4-29795684fe8e\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.384891 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ovn-combined-ca-bundle\") pod \"f3674061-72ad-4651-b5f4-29795684fe8e\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.385150 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f3674061-72ad-4651-b5f4-29795684fe8e-ovncontroller-config-0\") pod \"f3674061-72ad-4651-b5f4-29795684fe8e\" (UID: \"f3674061-72ad-4651-b5f4-29795684fe8e\") " Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.392048 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "f3674061-72ad-4651-b5f4-29795684fe8e" (UID: "f3674061-72ad-4651-b5f4-29795684fe8e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.398679 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3674061-72ad-4651-b5f4-29795684fe8e-kube-api-access-tcmcl" (OuterVolumeSpecName: "kube-api-access-tcmcl") pod "f3674061-72ad-4651-b5f4-29795684fe8e" (UID: "f3674061-72ad-4651-b5f4-29795684fe8e"). InnerVolumeSpecName "kube-api-access-tcmcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.419944 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f3674061-72ad-4651-b5f4-29795684fe8e" (UID: "f3674061-72ad-4651-b5f4-29795684fe8e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.420964 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-inventory" (OuterVolumeSpecName: "inventory") pod "f3674061-72ad-4651-b5f4-29795684fe8e" (UID: "f3674061-72ad-4651-b5f4-29795684fe8e"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.424813 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3674061-72ad-4651-b5f4-29795684fe8e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "f3674061-72ad-4651-b5f4-29795684fe8e" (UID: "f3674061-72ad-4651-b5f4-29795684fe8e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.488153 4856 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/f3674061-72ad-4651-b5f4-29795684fe8e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.488313 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcmcl\" (UniqueName: \"kubernetes.io/projected/f3674061-72ad-4651-b5f4-29795684fe8e-kube-api-access-tcmcl\") on node \"crc\" DevicePath \"\"" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.488373 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.488432 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.488492 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3674061-72ad-4651-b5f4-29795684fe8e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.812140 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-openstack-openstack-cell1-znkhd" event={"ID":"f3674061-72ad-4651-b5f4-29795684fe8e","Type":"ContainerDied","Data":"72c03a11297c61406c5ecdc4e5240db19998353997cf522bb8de877ed2134c80"} Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.812186 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c03a11297c61406c5ecdc4e5240db19998353997cf522bb8de877ed2134c80" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.812448 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-openstack-openstack-cell1-znkhd" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.891420 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-k5j8z"] Nov 22 09:13:57 crc kubenswrapper[4856]: E1122 09:13:57.891999 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3674061-72ad-4651-b5f4-29795684fe8e" containerName="ovn-openstack-openstack-cell1" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.892022 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3674061-72ad-4651-b5f4-29795684fe8e" containerName="ovn-openstack-openstack-cell1" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.892217 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3674061-72ad-4651-b5f4-29795684fe8e" containerName="ovn-openstack-openstack-cell1" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.893000 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.895036 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.895042 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.895497 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.895734 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.895772 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.895807 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.895916 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mzqh\" (UniqueName: \"kubernetes.io/projected/c65c99da-b7aa-4e12-9973-9d87da7c85af-kube-api-access-6mzqh\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.896097 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.896184 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.896340 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.897641 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.897907 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.901963 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-k5j8z"] Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.997567 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mzqh\" (UniqueName: \"kubernetes.io/projected/c65c99da-b7aa-4e12-9973-9d87da7c85af-kube-api-access-6mzqh\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.997842 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.997875 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.997923 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.997982 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:57 crc kubenswrapper[4856]: I1122 09:13:57.998016 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.002244 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-nova-metadata-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.002598 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.002790 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.003256 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-inventory\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.003401 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-ssh-key\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.016722 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mzqh\" (UniqueName: \"kubernetes.io/projected/c65c99da-b7aa-4e12-9973-9d87da7c85af-kube-api-access-6mzqh\") pod \"neutron-metadata-openstack-openstack-cell1-k5j8z\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.223424 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.762021 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-openstack-openstack-cell1-k5j8z"] Nov 22 09:13:58 crc kubenswrapper[4856]: I1122 09:13:58.822124 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" event={"ID":"c65c99da-b7aa-4e12-9973-9d87da7c85af","Type":"ContainerStarted","Data":"a411064104506aebf64e4ca953dba95556c03ba346bd6f695b1b9b55d425b899"} Nov 22 09:13:59 crc kubenswrapper[4856]: I1122 09:13:59.753997 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:13:59 crc kubenswrapper[4856]: I1122 09:13:59.754414 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:13:59 crc kubenswrapper[4856]: I1122 09:13:59.832672 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" event={"ID":"c65c99da-b7aa-4e12-9973-9d87da7c85af","Type":"ContainerStarted","Data":"d3b236b6256a11d0b390bd774b3df51a0349f944081cc702aaca01c5a6da79f1"} Nov 22 09:13:59 crc kubenswrapper[4856]: I1122 09:13:59.856588 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" podStartSLOduration=2.358203904 podStartE2EDuration="2.856568101s" podCreationTimestamp="2025-11-22 09:13:57 +0000 UTC" firstStartedPulling="2025-11-22 09:13:58.767325172 +0000 UTC m=+7881.180718440" lastFinishedPulling="2025-11-22 09:13:59.265689389 +0000 UTC m=+7881.679082637" observedRunningTime="2025-11-22 09:13:59.848614506 +0000 UTC m=+7882.262007774" watchObservedRunningTime="2025-11-22 09:13:59.856568101 +0000 UTC m=+7882.269961359" Nov 22 09:14:29 crc kubenswrapper[4856]: I1122 09:14:29.754133 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:14:29 crc kubenswrapper[4856]: I1122 09:14:29.754915 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:14:50 crc kubenswrapper[4856]: I1122 09:14:50.296110 4856 generic.go:334] "Generic (PLEG): container finished" podID="c65c99da-b7aa-4e12-9973-9d87da7c85af" containerID="d3b236b6256a11d0b390bd774b3df51a0349f944081cc702aaca01c5a6da79f1" exitCode=0 Nov 22 09:14:50 crc kubenswrapper[4856]: I1122 09:14:50.296209 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" 
event={"ID":"c65c99da-b7aa-4e12-9973-9d87da7c85af","Type":"ContainerDied","Data":"d3b236b6256a11d0b390bd774b3df51a0349f944081cc702aaca01c5a6da79f1"} Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.799471 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.941223 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mzqh\" (UniqueName: \"kubernetes.io/projected/c65c99da-b7aa-4e12-9973-9d87da7c85af-kube-api-access-6mzqh\") pod \"c65c99da-b7aa-4e12-9973-9d87da7c85af\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.941299 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-ssh-key\") pod \"c65c99da-b7aa-4e12-9973-9d87da7c85af\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.941417 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-nova-metadata-neutron-config-0\") pod \"c65c99da-b7aa-4e12-9973-9d87da7c85af\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.941489 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-ovn-metadata-agent-neutron-config-0\") pod \"c65c99da-b7aa-4e12-9973-9d87da7c85af\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.941551 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-inventory\") pod \"c65c99da-b7aa-4e12-9973-9d87da7c85af\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.941601 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-metadata-combined-ca-bundle\") pod \"c65c99da-b7aa-4e12-9973-9d87da7c85af\" (UID: \"c65c99da-b7aa-4e12-9973-9d87da7c85af\") " Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.947999 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c65c99da-b7aa-4e12-9973-9d87da7c85af-kube-api-access-6mzqh" (OuterVolumeSpecName: "kube-api-access-6mzqh") pod "c65c99da-b7aa-4e12-9973-9d87da7c85af" (UID: "c65c99da-b7aa-4e12-9973-9d87da7c85af"). InnerVolumeSpecName "kube-api-access-6mzqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.948065 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "c65c99da-b7aa-4e12-9973-9d87da7c85af" (UID: "c65c99da-b7aa-4e12-9973-9d87da7c85af"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.975823 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-inventory" (OuterVolumeSpecName: "inventory") pod "c65c99da-b7aa-4e12-9973-9d87da7c85af" (UID: "c65c99da-b7aa-4e12-9973-9d87da7c85af"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.976014 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "c65c99da-b7aa-4e12-9973-9d87da7c85af" (UID: "c65c99da-b7aa-4e12-9973-9d87da7c85af"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.978749 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "c65c99da-b7aa-4e12-9973-9d87da7c85af" (UID: "c65c99da-b7aa-4e12-9973-9d87da7c85af"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:14:51 crc kubenswrapper[4856]: I1122 09:14:51.980891 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c65c99da-b7aa-4e12-9973-9d87da7c85af" (UID: "c65c99da-b7aa-4e12-9973-9d87da7c85af"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.043963 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mzqh\" (UniqueName: \"kubernetes.io/projected/c65c99da-b7aa-4e12-9973-9d87da7c85af-kube-api-access-6mzqh\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.044007 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.044017 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.044032 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.044043 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.044055 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c65c99da-b7aa-4e12-9973-9d87da7c85af-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.318245 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" event={"ID":"c65c99da-b7aa-4e12-9973-9d87da7c85af","Type":"ContainerDied","Data":"a411064104506aebf64e4ca953dba95556c03ba346bd6f695b1b9b55d425b899"} Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.318304 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a411064104506aebf64e4ca953dba95556c03ba346bd6f695b1b9b55d425b899" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.318304 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-openstack-openstack-cell1-k5j8z" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.534586 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-bld8w"] Nov 22 09:14:52 crc kubenswrapper[4856]: E1122 09:14:52.535350 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c65c99da-b7aa-4e12-9973-9d87da7c85af" containerName="neutron-metadata-openstack-openstack-cell1" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.535418 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c65c99da-b7aa-4e12-9973-9d87da7c85af" containerName="neutron-metadata-openstack-openstack-cell1" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.535693 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c65c99da-b7aa-4e12-9973-9d87da7c85af" containerName="neutron-metadata-openstack-openstack-cell1" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.536487 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.538846 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.539911 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.540379 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.540491 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.540833 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.560897 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-bld8w"] Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.662104 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-inventory\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.662369 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.662738 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-ssh-key\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.662960 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.663000 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7gk4\" (UniqueName: \"kubernetes.io/projected/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-kube-api-access-v7gk4\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.764841 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-ssh-key\") pod \"libvirt-openstack-openstack-cell1-bld8w\" 
(UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.765233 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.765319 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7gk4\" (UniqueName: \"kubernetes.io/projected/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-kube-api-access-v7gk4\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.765422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-inventory\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.765645 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.770153 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-secret-0\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.770455 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-inventory\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.773811 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-ssh-key\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.774803 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-combined-ca-bundle\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.786637 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7gk4\" (UniqueName: 
\"kubernetes.io/projected/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-kube-api-access-v7gk4\") pod \"libvirt-openstack-openstack-cell1-bld8w\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:52 crc kubenswrapper[4856]: I1122 09:14:52.863945 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:14:53 crc kubenswrapper[4856]: I1122 09:14:53.622228 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-openstack-openstack-cell1-bld8w"] Nov 22 09:14:53 crc kubenswrapper[4856]: W1122 09:14:53.629981 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a9f3905_ecd4_4d91_9d32_89e0c6bf5c44.slice/crio-b7c88272d1f16c13c953669938829065da4f428f4ae37b3f836d5ee097b2f224 WatchSource:0}: Error finding container b7c88272d1f16c13c953669938829065da4f428f4ae37b3f836d5ee097b2f224: Status 404 returned error can't find the container with id b7c88272d1f16c13c953669938829065da4f428f4ae37b3f836d5ee097b2f224 Nov 22 09:14:54 crc kubenswrapper[4856]: I1122 09:14:54.347058 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" event={"ID":"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44","Type":"ContainerStarted","Data":"b7c88272d1f16c13c953669938829065da4f428f4ae37b3f836d5ee097b2f224"} Nov 22 09:14:55 crc kubenswrapper[4856]: I1122 09:14:55.357144 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" event={"ID":"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44","Type":"ContainerStarted","Data":"ea23717aabb1f84461c9d699bccefbfd10d9d16dc3633098b406765a78aaa8fe"} Nov 22 09:14:59 crc kubenswrapper[4856]: I1122 09:14:59.754935 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:14:59 crc kubenswrapper[4856]: I1122 09:14:59.755323 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:14:59 crc kubenswrapper[4856]: I1122 09:14:59.755374 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:14:59 crc kubenswrapper[4856]: I1122 09:14:59.756342 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc11a243af1a19cf535ce76d7bb4962a44374e57856bbefc7f1a740aa36c0387"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:14:59 crc kubenswrapper[4856]: I1122 09:14:59.756406 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" 
containerID="cri-o://fc11a243af1a19cf535ce76d7bb4962a44374e57856bbefc7f1a740aa36c0387" gracePeriod=600 Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.144048 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" podStartSLOduration=7.687887197 podStartE2EDuration="8.144010926s" podCreationTimestamp="2025-11-22 09:14:52 +0000 UTC" firstStartedPulling="2025-11-22 09:14:53.633814672 +0000 UTC m=+7936.047207930" lastFinishedPulling="2025-11-22 09:14:54.089938401 +0000 UTC m=+7936.503331659" observedRunningTime="2025-11-22 09:14:55.376416478 +0000 UTC m=+7937.789809746" watchObservedRunningTime="2025-11-22 09:15:00.144010926 +0000 UTC m=+7942.557404194" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.153963 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r"] Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.156318 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.158839 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.162051 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.170455 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r"] Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.231737 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc48c\" (UniqueName: \"kubernetes.io/projected/48401830-c3ac-4955-a5d1-125bcf6a70a3-kube-api-access-jc48c\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.232043 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48401830-c3ac-4955-a5d1-125bcf6a70a3-config-volume\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.232297 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48401830-c3ac-4955-a5d1-125bcf6a70a3-secret-volume\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.334670 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48401830-c3ac-4955-a5d1-125bcf6a70a3-config-volume\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.334740 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48401830-c3ac-4955-a5d1-125bcf6a70a3-secret-volume\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.334899 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc48c\" (UniqueName: \"kubernetes.io/projected/48401830-c3ac-4955-a5d1-125bcf6a70a3-kube-api-access-jc48c\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.335831 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48401830-c3ac-4955-a5d1-125bcf6a70a3-config-volume\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.348858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48401830-c3ac-4955-a5d1-125bcf6a70a3-secret-volume\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.352947 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc48c\" (UniqueName: \"kubernetes.io/projected/48401830-c3ac-4955-a5d1-125bcf6a70a3-kube-api-access-jc48c\") pod \"collect-profiles-29396715-smf5r\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.410577 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="fc11a243af1a19cf535ce76d7bb4962a44374e57856bbefc7f1a740aa36c0387" exitCode=0 Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.410630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"fc11a243af1a19cf535ce76d7bb4962a44374e57856bbefc7f1a740aa36c0387"} Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.410665 4856 scope.go:117] "RemoveContainer" containerID="a89026631d5d43a06332f9f7e6dfa9fcf03a096c699d35fe9e6f731af210d3a3" Nov 22 09:15:00 crc kubenswrapper[4856]: I1122 09:15:00.482761 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:01 crc kubenswrapper[4856]: I1122 09:15:01.009580 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r"] Nov 22 09:15:01 crc kubenswrapper[4856]: I1122 09:15:01.432422 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313"} Nov 22 09:15:01 crc kubenswrapper[4856]: I1122 09:15:01.438376 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" event={"ID":"48401830-c3ac-4955-a5d1-125bcf6a70a3","Type":"ContainerStarted","Data":"c843aa8c4cdaedaf3736714f0c456037aabe7edabc24a864ef358d3d2af4b1d6"} Nov 22 09:15:01 crc kubenswrapper[4856]: I1122 09:15:01.438430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" event={"ID":"48401830-c3ac-4955-a5d1-125bcf6a70a3","Type":"ContainerStarted","Data":"c5c9324d12e6cd11a6c591ae13607f00d35b210c5effac1965119e6518b83ef3"} Nov 22 09:15:01 crc kubenswrapper[4856]: I1122 09:15:01.498932 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" podStartSLOduration=1.498900917 podStartE2EDuration="1.498900917s" podCreationTimestamp="2025-11-22 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:15:01.488594499 +0000 UTC m=+7943.901987757" watchObservedRunningTime="2025-11-22 09:15:01.498900917 +0000 UTC m=+7943.912294175" Nov 22 09:15:02 crc kubenswrapper[4856]: I1122 09:15:02.453354 4856 generic.go:334] "Generic (PLEG): container finished" podID="48401830-c3ac-4955-a5d1-125bcf6a70a3" containerID="c843aa8c4cdaedaf3736714f0c456037aabe7edabc24a864ef358d3d2af4b1d6" exitCode=0 Nov 22 09:15:02 crc kubenswrapper[4856]: I1122 09:15:02.453421 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" event={"ID":"48401830-c3ac-4955-a5d1-125bcf6a70a3","Type":"ContainerDied","Data":"c843aa8c4cdaedaf3736714f0c456037aabe7edabc24a864ef358d3d2af4b1d6"} Nov 22 09:15:03 crc kubenswrapper[4856]: I1122 09:15:03.838790 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:03 crc kubenswrapper[4856]: I1122 09:15:03.923316 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48401830-c3ac-4955-a5d1-125bcf6a70a3-secret-volume\") pod \"48401830-c3ac-4955-a5d1-125bcf6a70a3\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " Nov 22 09:15:03 crc kubenswrapper[4856]: I1122 09:15:03.923806 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48401830-c3ac-4955-a5d1-125bcf6a70a3-config-volume\") pod \"48401830-c3ac-4955-a5d1-125bcf6a70a3\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " Nov 22 09:15:03 crc kubenswrapper[4856]: I1122 09:15:03.923844 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc48c\" (UniqueName: \"kubernetes.io/projected/48401830-c3ac-4955-a5d1-125bcf6a70a3-kube-api-access-jc48c\") pod \"48401830-c3ac-4955-a5d1-125bcf6a70a3\" (UID: \"48401830-c3ac-4955-a5d1-125bcf6a70a3\") " Nov 22 09:15:03 crc kubenswrapper[4856]: I1122 09:15:03.925397 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48401830-c3ac-4955-a5d1-125bcf6a70a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "48401830-c3ac-4955-a5d1-125bcf6a70a3" (UID: "48401830-c3ac-4955-a5d1-125bcf6a70a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:15:03 crc kubenswrapper[4856]: I1122 09:15:03.930248 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48401830-c3ac-4955-a5d1-125bcf6a70a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "48401830-c3ac-4955-a5d1-125bcf6a70a3" (UID: "48401830-c3ac-4955-a5d1-125bcf6a70a3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:15:03 crc kubenswrapper[4856]: I1122 09:15:03.930502 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48401830-c3ac-4955-a5d1-125bcf6a70a3-kube-api-access-jc48c" (OuterVolumeSpecName: "kube-api-access-jc48c") pod "48401830-c3ac-4955-a5d1-125bcf6a70a3" (UID: "48401830-c3ac-4955-a5d1-125bcf6a70a3"). InnerVolumeSpecName "kube-api-access-jc48c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.026066 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48401830-c3ac-4955-a5d1-125bcf6a70a3-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.026343 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc48c\" (UniqueName: \"kubernetes.io/projected/48401830-c3ac-4955-a5d1-125bcf6a70a3-kube-api-access-jc48c\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.026416 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/48401830-c3ac-4955-a5d1-125bcf6a70a3-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.475608 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" event={"ID":"48401830-c3ac-4955-a5d1-125bcf6a70a3","Type":"ContainerDied","Data":"c5c9324d12e6cd11a6c591ae13607f00d35b210c5effac1965119e6518b83ef3"} Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.475666 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c9324d12e6cd11a6c591ae13607f00d35b210c5effac1965119e6518b83ef3" Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.475675 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r" Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.930704 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t"] Nov 22 09:15:04 crc kubenswrapper[4856]: I1122 09:15:04.941797 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396670-wlh6t"] Nov 22 09:15:06 crc kubenswrapper[4856]: I1122 09:15:06.723720 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9036fc97-e929-4add-b263-f40f8374bb33" path="/var/lib/kubelet/pods/9036fc97-e929-4add-b263-f40f8374bb33/volumes" Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.859213 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bnlxn"] Nov 22 09:15:32 crc kubenswrapper[4856]: E1122 09:15:32.860220 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48401830-c3ac-4955-a5d1-125bcf6a70a3" containerName="collect-profiles" Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.860235 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="48401830-c3ac-4955-a5d1-125bcf6a70a3" containerName="collect-profiles" Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.860452 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="48401830-c3ac-4955-a5d1-125bcf6a70a3" containerName="collect-profiles" Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.861989 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.886570 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bnlxn"] Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.960557 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-utilities\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.960607 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-catalog-content\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:32 crc kubenswrapper[4856]: I1122 09:15:32.960951 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqts8\" (UniqueName: \"kubernetes.io/projected/071671ee-5ad7-4c28-be44-cb32f1494a76-kube-api-access-qqts8\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.063381 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-utilities\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.063461 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-catalog-content\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.063610 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqts8\" (UniqueName: \"kubernetes.io/projected/071671ee-5ad7-4c28-be44-cb32f1494a76-kube-api-access-qqts8\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.064153 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-utilities\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.064233 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-catalog-content\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.086103 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qqts8\" (UniqueName: \"kubernetes.io/projected/071671ee-5ad7-4c28-be44-cb32f1494a76-kube-api-access-qqts8\") pod \"certified-operators-bnlxn\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.200088 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:33 crc kubenswrapper[4856]: I1122 09:15:33.775762 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bnlxn"] Nov 22 09:15:34 crc kubenswrapper[4856]: I1122 09:15:34.775453 4856 generic.go:334] "Generic (PLEG): container finished" podID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerID="b9400aca805dd65fc6a4fd214b8cec57acd67af0b966005d5382470db48e5c7e" exitCode=0 Nov 22 09:15:34 crc kubenswrapper[4856]: I1122 09:15:34.775574 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnlxn" event={"ID":"071671ee-5ad7-4c28-be44-cb32f1494a76","Type":"ContainerDied","Data":"b9400aca805dd65fc6a4fd214b8cec57acd67af0b966005d5382470db48e5c7e"} Nov 22 09:15:34 crc kubenswrapper[4856]: I1122 09:15:34.775791 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnlxn" event={"ID":"071671ee-5ad7-4c28-be44-cb32f1494a76","Type":"ContainerStarted","Data":"f35365110e2c5460036545d9f70b34ea7d904cb9a64d4bf67b9359285659f741"} Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.245525 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dx5ds"] Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.248138 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.266071 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dx5ds"] Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.332398 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-utilities\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.332649 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdfzn\" (UniqueName: \"kubernetes.io/projected/a65b3798-5709-45f8-963b-25208a4888c4-kube-api-access-cdfzn\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.332749 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-catalog-content\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.435627 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdfzn\" (UniqueName: \"kubernetes.io/projected/a65b3798-5709-45f8-963b-25208a4888c4-kube-api-access-cdfzn\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.435781 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-catalog-content\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.435862 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-utilities\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.436670 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-catalog-content\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.436882 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-utilities\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.455845 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cdfzn\" (UniqueName: \"kubernetes.io/projected/a65b3798-5709-45f8-963b-25208a4888c4-kube-api-access-cdfzn\") pod \"redhat-operators-dx5ds\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:35 crc kubenswrapper[4856]: I1122 09:15:35.573038 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:36 crc kubenswrapper[4856]: I1122 09:15:36.099365 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dx5ds"] Nov 22 09:15:36 crc kubenswrapper[4856]: W1122 09:15:36.104920 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda65b3798_5709_45f8_963b_25208a4888c4.slice/crio-f1dd2778e8e931128939d972fa5034d374fd0ecc550b1b401c65c90e0bcfed46 WatchSource:0}: Error finding container f1dd2778e8e931128939d972fa5034d374fd0ecc550b1b401c65c90e0bcfed46: Status 404 returned error can't find the container with id f1dd2778e8e931128939d972fa5034d374fd0ecc550b1b401c65c90e0bcfed46 Nov 22 09:15:36 crc kubenswrapper[4856]: I1122 09:15:36.800739 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx5ds" event={"ID":"a65b3798-5709-45f8-963b-25208a4888c4","Type":"ContainerStarted","Data":"f1dd2778e8e931128939d972fa5034d374fd0ecc550b1b401c65c90e0bcfed46"} Nov 22 09:15:37 crc kubenswrapper[4856]: I1122 09:15:37.812400 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnlxn" event={"ID":"071671ee-5ad7-4c28-be44-cb32f1494a76","Type":"ContainerStarted","Data":"30fc0caec43631137df7c5d7e3872bd74ea8e87d6811c680afbc4ea432a77651"} Nov 22 09:15:37 crc kubenswrapper[4856]: I1122 09:15:37.815321 4856 generic.go:334] "Generic (PLEG): container finished" podID="a65b3798-5709-45f8-963b-25208a4888c4" containerID="4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093" exitCode=0 Nov 22 09:15:37 crc kubenswrapper[4856]: I1122 09:15:37.815364 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx5ds" event={"ID":"a65b3798-5709-45f8-963b-25208a4888c4","Type":"ContainerDied","Data":"4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093"} Nov 22 09:15:39 crc kubenswrapper[4856]: I1122 09:15:39.839414 4856 generic.go:334] "Generic (PLEG): container finished" podID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerID="30fc0caec43631137df7c5d7e3872bd74ea8e87d6811c680afbc4ea432a77651" exitCode=0 Nov 22 09:15:39 crc kubenswrapper[4856]: I1122 09:15:39.839524 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnlxn" event={"ID":"071671ee-5ad7-4c28-be44-cb32f1494a76","Type":"ContainerDied","Data":"30fc0caec43631137df7c5d7e3872bd74ea8e87d6811c680afbc4ea432a77651"} Nov 22 09:15:39 crc kubenswrapper[4856]: I1122 09:15:39.860220 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:15:41 crc kubenswrapper[4856]: I1122 09:15:41.014404 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx5ds" event={"ID":"a65b3798-5709-45f8-963b-25208a4888c4","Type":"ContainerStarted","Data":"8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310"} Nov 22 09:15:41 crc kubenswrapper[4856]: I1122 09:15:41.869710 4856 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-bnlxn" event={"ID":"071671ee-5ad7-4c28-be44-cb32f1494a76","Type":"ContainerStarted","Data":"2815eae9c84283d32f71908a1d43a10ffe5c24f27292893a3fca9ab44ec9dfa5"} Nov 22 09:15:41 crc kubenswrapper[4856]: I1122 09:15:41.897463 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bnlxn" podStartSLOduration=3.5808456250000003 podStartE2EDuration="9.897440119s" podCreationTimestamp="2025-11-22 09:15:32 +0000 UTC" firstStartedPulling="2025-11-22 09:15:34.777415682 +0000 UTC m=+7977.190808940" lastFinishedPulling="2025-11-22 09:15:41.094010176 +0000 UTC m=+7983.507403434" observedRunningTime="2025-11-22 09:15:41.889459813 +0000 UTC m=+7984.302853091" watchObservedRunningTime="2025-11-22 09:15:41.897440119 +0000 UTC m=+7984.310833377" Nov 22 09:15:42 crc kubenswrapper[4856]: I1122 09:15:42.880669 4856 generic.go:334] "Generic (PLEG): container finished" podID="a65b3798-5709-45f8-963b-25208a4888c4" containerID="8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310" exitCode=0 Nov 22 09:15:42 crc kubenswrapper[4856]: I1122 09:15:42.880774 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx5ds" event={"ID":"a65b3798-5709-45f8-963b-25208a4888c4","Type":"ContainerDied","Data":"8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310"} Nov 22 09:15:43 crc kubenswrapper[4856]: I1122 09:15:43.200326 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:43 crc kubenswrapper[4856]: I1122 09:15:43.200389 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:43 crc kubenswrapper[4856]: I1122 09:15:43.248569 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:48 crc kubenswrapper[4856]: I1122 09:15:48.632679 4856 scope.go:117] "RemoveContainer" containerID="cda8080a120af309508d245c33e163d8158e8ea617945b08f4b6c9a30ca8b5f6" Nov 22 09:15:53 crc kubenswrapper[4856]: I1122 09:15:53.247716 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:53 crc kubenswrapper[4856]: I1122 09:15:53.294525 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bnlxn"] Nov 22 09:15:53 crc kubenswrapper[4856]: I1122 09:15:53.987391 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bnlxn" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="registry-server" containerID="cri-o://2815eae9c84283d32f71908a1d43a10ffe5c24f27292893a3fca9ab44ec9dfa5" gracePeriod=2 Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.008285 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx5ds" event={"ID":"a65b3798-5709-45f8-963b-25208a4888c4","Type":"ContainerStarted","Data":"bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b"} Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.013437 4856 generic.go:334] "Generic (PLEG): container finished" podID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerID="2815eae9c84283d32f71908a1d43a10ffe5c24f27292893a3fca9ab44ec9dfa5" exitCode=0 Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.013493 
4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnlxn" event={"ID":"071671ee-5ad7-4c28-be44-cb32f1494a76","Type":"ContainerDied","Data":"2815eae9c84283d32f71908a1d43a10ffe5c24f27292893a3fca9ab44ec9dfa5"} Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.033191 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dx5ds" podStartSLOduration=3.970777387 podStartE2EDuration="20.033174308s" podCreationTimestamp="2025-11-22 09:15:35 +0000 UTC" firstStartedPulling="2025-11-22 09:15:37.827135721 +0000 UTC m=+7980.240528979" lastFinishedPulling="2025-11-22 09:15:53.889532642 +0000 UTC m=+7996.302925900" observedRunningTime="2025-11-22 09:15:55.032746037 +0000 UTC m=+7997.446139305" watchObservedRunningTime="2025-11-22 09:15:55.033174308 +0000 UTC m=+7997.446567566" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.237661 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.359540 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-catalog-content\") pod \"071671ee-5ad7-4c28-be44-cb32f1494a76\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.359857 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-utilities\") pod \"071671ee-5ad7-4c28-be44-cb32f1494a76\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.359947 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqts8\" (UniqueName: \"kubernetes.io/projected/071671ee-5ad7-4c28-be44-cb32f1494a76-kube-api-access-qqts8\") pod \"071671ee-5ad7-4c28-be44-cb32f1494a76\" (UID: \"071671ee-5ad7-4c28-be44-cb32f1494a76\") " Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.360877 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-utilities" (OuterVolumeSpecName: "utilities") pod "071671ee-5ad7-4c28-be44-cb32f1494a76" (UID: "071671ee-5ad7-4c28-be44-cb32f1494a76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.367069 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/071671ee-5ad7-4c28-be44-cb32f1494a76-kube-api-access-qqts8" (OuterVolumeSpecName: "kube-api-access-qqts8") pod "071671ee-5ad7-4c28-be44-cb32f1494a76" (UID: "071671ee-5ad7-4c28-be44-cb32f1494a76"). InnerVolumeSpecName "kube-api-access-qqts8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.405908 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "071671ee-5ad7-4c28-be44-cb32f1494a76" (UID: "071671ee-5ad7-4c28-be44-cb32f1494a76"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.462395 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.462435 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqts8\" (UniqueName: \"kubernetes.io/projected/071671ee-5ad7-4c28-be44-cb32f1494a76-kube-api-access-qqts8\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.462446 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/071671ee-5ad7-4c28-be44-cb32f1494a76-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.574254 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:55 crc kubenswrapper[4856]: I1122 09:15:55.574393 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.029873 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bnlxn" Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.030070 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bnlxn" event={"ID":"071671ee-5ad7-4c28-be44-cb32f1494a76","Type":"ContainerDied","Data":"f35365110e2c5460036545d9f70b34ea7d904cb9a64d4bf67b9359285659f741"} Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.030410 4856 scope.go:117] "RemoveContainer" containerID="2815eae9c84283d32f71908a1d43a10ffe5c24f27292893a3fca9ab44ec9dfa5" Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.091737 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bnlxn"] Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.095134 4856 scope.go:117] "RemoveContainer" containerID="30fc0caec43631137df7c5d7e3872bd74ea8e87d6811c680afbc4ea432a77651" Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.106897 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bnlxn"] Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.145288 4856 scope.go:117] "RemoveContainer" containerID="b9400aca805dd65fc6a4fd214b8cec57acd67af0b966005d5382470db48e5c7e" Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.622128 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dx5ds" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="registry-server" probeResult="failure" output=< Nov 22 09:15:56 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:15:56 crc kubenswrapper[4856]: > Nov 22 09:15:56 crc kubenswrapper[4856]: I1122 09:15:56.724117 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" path="/var/lib/kubelet/pods/071671ee-5ad7-4c28-be44-cb32f1494a76/volumes" Nov 22 09:16:05 crc kubenswrapper[4856]: I1122 09:16:05.647785 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:16:05 crc kubenswrapper[4856]: I1122 
09:16:05.812653 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:16:06 crc kubenswrapper[4856]: I1122 09:16:06.446001 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dx5ds"] Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.159261 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dx5ds" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="registry-server" containerID="cri-o://bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b" gracePeriod=2 Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.675029 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.726897 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdfzn\" (UniqueName: \"kubernetes.io/projected/a65b3798-5709-45f8-963b-25208a4888c4-kube-api-access-cdfzn\") pod \"a65b3798-5709-45f8-963b-25208a4888c4\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.727068 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-utilities\") pod \"a65b3798-5709-45f8-963b-25208a4888c4\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.727149 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-catalog-content\") pod \"a65b3798-5709-45f8-963b-25208a4888c4\" (UID: \"a65b3798-5709-45f8-963b-25208a4888c4\") " Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.728156 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-utilities" (OuterVolumeSpecName: "utilities") pod "a65b3798-5709-45f8-963b-25208a4888c4" (UID: "a65b3798-5709-45f8-963b-25208a4888c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.743981 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65b3798-5709-45f8-963b-25208a4888c4-kube-api-access-cdfzn" (OuterVolumeSpecName: "kube-api-access-cdfzn") pod "a65b3798-5709-45f8-963b-25208a4888c4" (UID: "a65b3798-5709-45f8-963b-25208a4888c4"). InnerVolumeSpecName "kube-api-access-cdfzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.821821 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a65b3798-5709-45f8-963b-25208a4888c4" (UID: "a65b3798-5709-45f8-963b-25208a4888c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.833942 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.834010 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a65b3798-5709-45f8-963b-25208a4888c4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:16:07 crc kubenswrapper[4856]: I1122 09:16:07.834067 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdfzn\" (UniqueName: \"kubernetes.io/projected/a65b3798-5709-45f8-963b-25208a4888c4-kube-api-access-cdfzn\") on node \"crc\" DevicePath \"\"" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.170736 4856 generic.go:334] "Generic (PLEG): container finished" podID="a65b3798-5709-45f8-963b-25208a4888c4" containerID="bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b" exitCode=0 Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.170799 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dx5ds" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.170824 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx5ds" event={"ID":"a65b3798-5709-45f8-963b-25208a4888c4","Type":"ContainerDied","Data":"bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b"} Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.171244 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dx5ds" event={"ID":"a65b3798-5709-45f8-963b-25208a4888c4","Type":"ContainerDied","Data":"f1dd2778e8e931128939d972fa5034d374fd0ecc550b1b401c65c90e0bcfed46"} Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.171263 4856 scope.go:117] "RemoveContainer" containerID="bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.205998 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dx5ds"] Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.206016 4856 scope.go:117] "RemoveContainer" containerID="8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.217208 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dx5ds"] Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.229107 4856 scope.go:117] "RemoveContainer" containerID="4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.284184 4856 scope.go:117] "RemoveContainer" containerID="bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b" Nov 22 09:16:08 crc kubenswrapper[4856]: E1122 09:16:08.284704 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b\": container with ID starting with bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b not found: ID does not exist" containerID="bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.284747 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b"} err="failed to get container status \"bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b\": rpc error: code = NotFound desc = could not find container \"bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b\": container with ID starting with bf5854250c1afb876838f85af0a19876f7d0e23fddeaebedd5bdc1d38cf6e28b not found: ID does not exist" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.284777 4856 scope.go:117] "RemoveContainer" containerID="8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310" Nov 22 09:16:08 crc kubenswrapper[4856]: E1122 09:16:08.285016 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310\": container with ID starting with 8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310 not found: ID does not exist" containerID="8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.285045 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310"} err="failed to get container status \"8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310\": rpc error: code = NotFound desc = could not find container \"8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310\": container with ID starting with 8e31b2be3104ba191dc38f5eab9c512b2f251443e474ab9d42f9285fb91b4310 not found: ID does not exist" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.285061 4856 scope.go:117] "RemoveContainer" containerID="4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093" Nov 22 09:16:08 crc kubenswrapper[4856]: E1122 09:16:08.285455 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093\": container with ID starting with 4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093 not found: ID does not exist" containerID="4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.285483 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093"} err="failed to get container status \"4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093\": rpc error: code = NotFound desc = could not find container \"4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093\": container with ID starting with 4d5af3f1436a0e85dfbe9a8bdfa9472f1ef5b77f5da96bbb22a700a27e1b4093 not found: ID does not exist" Nov 22 09:16:08 crc kubenswrapper[4856]: I1122 09:16:08.721794 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65b3798-5709-45f8-963b-25208a4888c4" path="/var/lib/kubelet/pods/a65b3798-5709-45f8-963b-25208a4888c4/volumes" Nov 22 09:17:29 crc kubenswrapper[4856]: I1122 09:17:29.754209 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:17:29 crc kubenswrapper[4856]: I1122 09:17:29.754896 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:17:59 crc kubenswrapper[4856]: I1122 09:17:59.755106 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:17:59 crc kubenswrapper[4856]: I1122 09:17:59.755771 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:18:29 crc kubenswrapper[4856]: I1122 09:18:29.754567 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:18:29 crc kubenswrapper[4856]: I1122 09:18:29.755290 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:18:29 crc kubenswrapper[4856]: I1122 09:18:29.755351 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:18:29 crc kubenswrapper[4856]: I1122 09:18:29.756301 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:18:29 crc kubenswrapper[4856]: I1122 09:18:29.756370 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" gracePeriod=600 Nov 22 09:18:29 crc kubenswrapper[4856]: E1122 09:18:29.878172 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:18:30 crc kubenswrapper[4856]: I1122 09:18:30.553214 4856 
generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" exitCode=0 Nov 22 09:18:30 crc kubenswrapper[4856]: I1122 09:18:30.553297 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313"} Nov 22 09:18:30 crc kubenswrapper[4856]: I1122 09:18:30.553763 4856 scope.go:117] "RemoveContainer" containerID="fc11a243af1a19cf535ce76d7bb4962a44374e57856bbefc7f1a740aa36c0387" Nov 22 09:18:30 crc kubenswrapper[4856]: I1122 09:18:30.554192 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:18:30 crc kubenswrapper[4856]: E1122 09:18:30.554479 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:18:45 crc kubenswrapper[4856]: I1122 09:18:45.710430 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:18:45 crc kubenswrapper[4856]: E1122 09:18:45.711241 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:18:59 crc kubenswrapper[4856]: I1122 09:18:59.710137 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:18:59 crc kubenswrapper[4856]: E1122 09:18:59.710929 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:19:14 crc kubenswrapper[4856]: I1122 09:19:14.709817 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:19:14 crc kubenswrapper[4856]: E1122 09:19:14.710665 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:19:24 crc kubenswrapper[4856]: I1122 09:19:24.050268 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" containerID="ea23717aabb1f84461c9d699bccefbfd10d9d16dc3633098b406765a78aaa8fe" exitCode=0 Nov 22 09:19:24 crc kubenswrapper[4856]: I1122 09:19:24.050362 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" event={"ID":"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44","Type":"ContainerDied","Data":"ea23717aabb1f84461c9d699bccefbfd10d9d16dc3633098b406765a78aaa8fe"} Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.503776 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.612757 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-combined-ca-bundle\") pod \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.613377 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-secret-0\") pod \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.613423 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7gk4\" (UniqueName: \"kubernetes.io/projected/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-kube-api-access-v7gk4\") pod \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.613587 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-inventory\") pod \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.613720 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-ssh-key\") pod \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\" (UID: \"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44\") " Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.634015 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-kube-api-access-v7gk4" (OuterVolumeSpecName: "kube-api-access-v7gk4") pod "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" (UID: "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44"). InnerVolumeSpecName "kube-api-access-v7gk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.637912 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" (UID: "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.685703 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" (UID: "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.693793 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" (UID: "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.717575 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7gk4\" (UniqueName: \"kubernetes.io/projected/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-kube-api-access-v7gk4\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.717610 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.717623 4856 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.717635 4856 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.734677 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-inventory" (OuterVolumeSpecName: "inventory") pod "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" (UID: "8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:19:25 crc kubenswrapper[4856]: I1122 09:19:25.819501 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.073605 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" event={"ID":"8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44","Type":"ContainerDied","Data":"b7c88272d1f16c13c953669938829065da4f428f4ae37b3f836d5ee097b2f224"} Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.073685 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7c88272d1f16c13c953669938829065da4f428f4ae37b3f836d5ee097b2f224" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.073775 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-openstack-openstack-cell1-bld8w" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.180950 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-2znnb"] Nov 22 09:19:26 crc kubenswrapper[4856]: E1122 09:19:26.181371 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="extract-content" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181391 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="extract-content" Nov 22 09:19:26 crc kubenswrapper[4856]: E1122 09:19:26.181400 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" containerName="libvirt-openstack-openstack-cell1" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181407 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" containerName="libvirt-openstack-openstack-cell1" Nov 22 09:19:26 crc kubenswrapper[4856]: E1122 09:19:26.181423 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="extract-utilities" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181430 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="extract-utilities" Nov 22 09:19:26 crc kubenswrapper[4856]: E1122 09:19:26.181449 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="registry-server" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181455 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="registry-server" Nov 22 09:19:26 crc kubenswrapper[4856]: E1122 09:19:26.181469 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="extract-content" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181475 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="extract-content" Nov 22 09:19:26 crc kubenswrapper[4856]: E1122 09:19:26.181491 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="extract-utilities" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181497 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="extract-utilities" Nov 22 09:19:26 crc kubenswrapper[4856]: E1122 09:19:26.181505 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="registry-server" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181525 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="registry-server" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181732 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44" containerName="libvirt-openstack-openstack-cell1" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.181754 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="071671ee-5ad7-4c28-be44-cb32f1494a76" containerName="registry-server" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 
09:19:26.181762 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65b3798-5709-45f8-963b-25208a4888c4" containerName="registry-server" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.182629 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.192243 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-2znnb"] Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.193655 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.193801 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.193879 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.194182 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.194467 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.195042 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.195217 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.328990 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgjzr\" (UniqueName: \"kubernetes.io/projected/30440752-c1e1-4e68-b1af-ac6ee184d1c6-kube-api-access-fgjzr\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.329088 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.329266 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.329319 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc 
kubenswrapper[4856]: I1122 09:19:26.329377 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.329429 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.329633 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.329784 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.329891 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-inventory\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.431573 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.431645 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.431706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-inventory\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 
09:19:26.431764 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgjzr\" (UniqueName: \"kubernetes.io/projected/30440752-c1e1-4e68-b1af-ac6ee184d1c6-kube-api-access-fgjzr\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.431832 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.431874 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.431894 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.431953 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.432009 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.435215 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cells-global-config-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.437016 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.437228 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.438223 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.438252 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.438844 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-inventory\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.440392 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.440664 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-ssh-key\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.454160 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgjzr\" (UniqueName: \"kubernetes.io/projected/30440752-c1e1-4e68-b1af-ac6ee184d1c6-kube-api-access-fgjzr\") pod \"nova-cell1-openstack-openstack-cell1-2znnb\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:26 crc kubenswrapper[4856]: I1122 09:19:26.508538 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:19:27 crc kubenswrapper[4856]: I1122 09:19:27.070461 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-openstack-cell1-2znnb"] Nov 22 09:19:27 crc kubenswrapper[4856]: I1122 09:19:27.091557 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" event={"ID":"30440752-c1e1-4e68-b1af-ac6ee184d1c6","Type":"ContainerStarted","Data":"68f09c190a47eba5816e07fcec2aeb484eabc06dea55ef5e005546d415e69dea"} Nov 22 09:19:28 crc kubenswrapper[4856]: I1122 09:19:28.104284 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" event={"ID":"30440752-c1e1-4e68-b1af-ac6ee184d1c6","Type":"ContainerStarted","Data":"e0fec45c079857b0b96954c0c33dc74b45d4fb1953cee1f63f4038286b080d14"} Nov 22 09:19:28 crc kubenswrapper[4856]: I1122 09:19:28.131024 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" podStartSLOduration=1.6268072679999999 podStartE2EDuration="2.131001735s" podCreationTimestamp="2025-11-22 09:19:26 +0000 UTC" firstStartedPulling="2025-11-22 09:19:27.080193076 +0000 UTC m=+8209.493586334" lastFinishedPulling="2025-11-22 09:19:27.584387543 +0000 UTC m=+8209.997780801" observedRunningTime="2025-11-22 09:19:28.124206721 +0000 UTC m=+8210.537599979" watchObservedRunningTime="2025-11-22 09:19:28.131001735 +0000 UTC m=+8210.544394993" Nov 22 09:19:29 crc kubenswrapper[4856]: I1122 09:19:29.710933 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:19:29 crc kubenswrapper[4856]: E1122 09:19:29.711963 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:19:43 crc kubenswrapper[4856]: I1122 09:19:43.710404 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:19:43 crc kubenswrapper[4856]: E1122 09:19:43.711162 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:19:55 crc kubenswrapper[4856]: I1122 09:19:55.709642 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:19:55 crc kubenswrapper[4856]: E1122 09:19:55.710391 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:20:07 crc kubenswrapper[4856]: I1122 09:20:07.710020 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:20:07 crc kubenswrapper[4856]: E1122 09:20:07.711923 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:20:20 crc kubenswrapper[4856]: I1122 09:20:20.710027 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:20:20 crc kubenswrapper[4856]: E1122 09:20:20.710796 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:20:33 crc kubenswrapper[4856]: I1122 09:20:33.710012 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:20:33 crc kubenswrapper[4856]: E1122 09:20:33.710885 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:20:44 crc kubenswrapper[4856]: I1122 09:20:44.710562 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:20:44 crc kubenswrapper[4856]: E1122 09:20:44.711405 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:20:59 crc kubenswrapper[4856]: I1122 09:20:59.710551 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:20:59 crc kubenswrapper[4856]: E1122 09:20:59.712479 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:21:13 crc kubenswrapper[4856]: I1122 09:21:13.710270 4856 scope.go:117] "RemoveContainer" 
containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:21:13 crc kubenswrapper[4856]: E1122 09:21:13.711025 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:21:26 crc kubenswrapper[4856]: I1122 09:21:26.710436 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:21:26 crc kubenswrapper[4856]: E1122 09:21:26.711217 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:21:39 crc kubenswrapper[4856]: I1122 09:21:39.710075 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:21:39 crc kubenswrapper[4856]: E1122 09:21:39.710848 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:21:53 crc kubenswrapper[4856]: I1122 09:21:53.710000 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:21:53 crc kubenswrapper[4856]: E1122 09:21:53.710483 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:22:05 crc kubenswrapper[4856]: I1122 09:22:05.711411 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:22:05 crc kubenswrapper[4856]: E1122 09:22:05.712739 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:22:16 crc kubenswrapper[4856]: I1122 09:22:16.709715 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:22:16 crc kubenswrapper[4856]: E1122 09:22:16.710641 4856 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:22:28 crc kubenswrapper[4856]: I1122 09:22:28.818887 4856 generic.go:334] "Generic (PLEG): container finished" podID="30440752-c1e1-4e68-b1af-ac6ee184d1c6" containerID="e0fec45c079857b0b96954c0c33dc74b45d4fb1953cee1f63f4038286b080d14" exitCode=0 Nov 22 09:22:28 crc kubenswrapper[4856]: I1122 09:22:28.819496 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" event={"ID":"30440752-c1e1-4e68-b1af-ac6ee184d1c6","Type":"ContainerDied","Data":"e0fec45c079857b0b96954c0c33dc74b45d4fb1953cee1f63f4038286b080d14"} Nov 22 09:22:29 crc kubenswrapper[4856]: I1122 09:22:29.710159 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:22:29 crc kubenswrapper[4856]: E1122 09:22:29.710452 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.273280 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391058 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-1\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391256 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cells-global-config-0\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391290 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-ssh-key\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391321 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-combined-ca-bundle\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-0\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391416 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-1\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391461 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgjzr\" (UniqueName: \"kubernetes.io/projected/30440752-c1e1-4e68-b1af-ac6ee184d1c6-kube-api-access-fgjzr\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391486 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-0\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.391549 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-inventory\") pod \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\" (UID: \"30440752-c1e1-4e68-b1af-ac6ee184d1c6\") " Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.396924 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.397176 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30440752-c1e1-4e68-b1af-ac6ee184d1c6-kube-api-access-fgjzr" (OuterVolumeSpecName: "kube-api-access-fgjzr") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "kube-api-access-fgjzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.418735 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.421127 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-inventory" (OuterVolumeSpecName: "inventory") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.422027 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.426421 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.427153 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.427200 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.427909 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "30440752-c1e1-4e68-b1af-ac6ee184d1c6" (UID: "30440752-c1e1-4e68-b1af-ac6ee184d1c6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494092 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgjzr\" (UniqueName: \"kubernetes.io/projected/30440752-c1e1-4e68-b1af-ac6ee184d1c6-kube-api-access-fgjzr\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494132 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494148 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494160 4856 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494172 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494182 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494194 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494207 4856 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.494221 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/30440752-c1e1-4e68-b1af-ac6ee184d1c6-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.841494 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" event={"ID":"30440752-c1e1-4e68-b1af-ac6ee184d1c6","Type":"ContainerDied","Data":"68f09c190a47eba5816e07fcec2aeb484eabc06dea55ef5e005546d415e69dea"} Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.841819 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f09c190a47eba5816e07fcec2aeb484eabc06dea55ef5e005546d415e69dea" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.841568 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-openstack-cell1-2znnb" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.933027 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-rhs7q"] Nov 22 09:22:30 crc kubenswrapper[4856]: E1122 09:22:30.934342 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30440752-c1e1-4e68-b1af-ac6ee184d1c6" containerName="nova-cell1-openstack-openstack-cell1" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.934544 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="30440752-c1e1-4e68-b1af-ac6ee184d1c6" containerName="nova-cell1-openstack-openstack-cell1" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.934902 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="30440752-c1e1-4e68-b1af-ac6ee184d1c6" containerName="nova-cell1-openstack-openstack-cell1" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.936117 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.938591 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.939034 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.939175 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.939195 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.939402 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:22:30 crc kubenswrapper[4856]: I1122 09:22:30.945398 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-rhs7q"] Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.108094 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.108144 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-inventory\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.108253 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: 
I1122 09:22:31.108286 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.108315 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ssh-key\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.108332 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.108550 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcrv\" (UniqueName: \"kubernetes.io/projected/0845a70f-bedf-4495-8e38-207547e02a31-kube-api-access-zxcrv\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.211101 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.211166 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.211197 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ssh-key\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.211222 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.211278 
4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcrv\" (UniqueName: \"kubernetes.io/projected/0845a70f-bedf-4495-8e38-207547e02a31-kube-api-access-zxcrv\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.211366 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.211392 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-inventory\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.215094 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-1\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.215293 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-0\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.215858 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ssh-key\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.215848 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-telemetry-combined-ca-bundle\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.216233 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-inventory\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.216809 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-2\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.228045 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcrv\" (UniqueName: \"kubernetes.io/projected/0845a70f-bedf-4495-8e38-207547e02a31-kube-api-access-zxcrv\") pod \"telemetry-openstack-openstack-cell1-rhs7q\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.260440 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.768824 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-openstack-openstack-cell1-rhs7q"] Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.776036 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:22:31 crc kubenswrapper[4856]: I1122 09:22:31.850478 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" event={"ID":"0845a70f-bedf-4495-8e38-207547e02a31","Type":"ContainerStarted","Data":"72827997439b3c3587c52206cd17e62f865b6ec64f43870d40f7fb0e593708ff"} Nov 22 09:22:33 crc kubenswrapper[4856]: I1122 09:22:33.869099 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" event={"ID":"0845a70f-bedf-4495-8e38-207547e02a31","Type":"ContainerStarted","Data":"82a2caa8a40ce0803164fa622a39401b7d15654559aecfbfba50e5fdab63b740"} Nov 22 09:22:33 crc kubenswrapper[4856]: I1122 09:22:33.892166 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" podStartSLOduration=2.607446431 podStartE2EDuration="3.892149746s" podCreationTimestamp="2025-11-22 09:22:30 +0000 UTC" firstStartedPulling="2025-11-22 09:22:31.77577057 +0000 UTC m=+8394.189163828" lastFinishedPulling="2025-11-22 09:22:33.060473885 +0000 UTC m=+8395.473867143" observedRunningTime="2025-11-22 09:22:33.883184065 +0000 UTC m=+8396.296577323" watchObservedRunningTime="2025-11-22 09:22:33.892149746 +0000 UTC m=+8396.305542994" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.021625 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8mv9b"] Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.024870 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.075294 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8mv9b"] Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.201067 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-catalog-content\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.201357 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvrv\" (UniqueName: \"kubernetes.io/projected/f6e3ae3d-0770-4ee0-a03e-279941539afc-kube-api-access-xqvrv\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.201719 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-utilities\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.304609 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-utilities\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.304789 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-catalog-content\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.304853 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvrv\" (UniqueName: \"kubernetes.io/projected/f6e3ae3d-0770-4ee0-a03e-279941539afc-kube-api-access-xqvrv\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.305078 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-utilities\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.305390 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-catalog-content\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.340027 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xqvrv\" (UniqueName: \"kubernetes.io/projected/f6e3ae3d-0770-4ee0-a03e-279941539afc-kube-api-access-xqvrv\") pod \"community-operators-8mv9b\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.350713 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.862738 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8mv9b"] Nov 22 09:22:39 crc kubenswrapper[4856]: I1122 09:22:39.947662 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mv9b" event={"ID":"f6e3ae3d-0770-4ee0-a03e-279941539afc","Type":"ContainerStarted","Data":"1e567beddfbb0c33fa4baa3f7ea6e9fef3c10f0b8d9c38a1d3cf9ce90ddb983e"} Nov 22 09:22:40 crc kubenswrapper[4856]: I1122 09:22:40.960313 4856 generic.go:334] "Generic (PLEG): container finished" podID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerID="0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb" exitCode=0 Nov 22 09:22:40 crc kubenswrapper[4856]: I1122 09:22:40.960432 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mv9b" event={"ID":"f6e3ae3d-0770-4ee0-a03e-279941539afc","Type":"ContainerDied","Data":"0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb"} Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.631631 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2tjt"] Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.635616 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.640056 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2tjt"] Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.710790 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:22:41 crc kubenswrapper[4856]: E1122 09:22:41.711134 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.770835 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-utilities\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.770934 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn88c\" (UniqueName: \"kubernetes.io/projected/7dc20166-a226-436e-8401-899b1eae0e42-kube-api-access-dn88c\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.771433 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-catalog-content\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.873545 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-utilities\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.873627 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn88c\" (UniqueName: \"kubernetes.io/projected/7dc20166-a226-436e-8401-899b1eae0e42-kube-api-access-dn88c\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.873757 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-catalog-content\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.874187 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-utilities\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.874327 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-catalog-content\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.895051 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn88c\" (UniqueName: \"kubernetes.io/projected/7dc20166-a226-436e-8401-899b1eae0e42-kube-api-access-dn88c\") pod \"redhat-marketplace-c2tjt\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:41 crc kubenswrapper[4856]: I1122 09:22:41.968009 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:42 crc kubenswrapper[4856]: I1122 09:22:42.470323 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2tjt"] Nov 22 09:22:42 crc kubenswrapper[4856]: W1122 09:22:42.470343 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dc20166_a226_436e_8401_899b1eae0e42.slice/crio-02b1443405248b5c0354ab4ab71c52c7465db6b360d263241c4b65e8b1db7161 WatchSource:0}: Error finding container 02b1443405248b5c0354ab4ab71c52c7465db6b360d263241c4b65e8b1db7161: Status 404 returned error can't find the container with id 02b1443405248b5c0354ab4ab71c52c7465db6b360d263241c4b65e8b1db7161 Nov 22 09:22:42 crc kubenswrapper[4856]: I1122 09:22:42.984645 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mv9b" event={"ID":"f6e3ae3d-0770-4ee0-a03e-279941539afc","Type":"ContainerStarted","Data":"ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c"} Nov 22 09:22:42 crc kubenswrapper[4856]: I1122 09:22:42.986895 4856 generic.go:334] "Generic (PLEG): container finished" podID="7dc20166-a226-436e-8401-899b1eae0e42" containerID="1f548107a2fda58316336588805417fc3abec0f7f8a52612703fafb1f4c1898e" exitCode=0 Nov 22 09:22:42 crc kubenswrapper[4856]: I1122 09:22:42.986923 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2tjt" event={"ID":"7dc20166-a226-436e-8401-899b1eae0e42","Type":"ContainerDied","Data":"1f548107a2fda58316336588805417fc3abec0f7f8a52612703fafb1f4c1898e"} Nov 22 09:22:42 crc kubenswrapper[4856]: I1122 09:22:42.986940 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2tjt" event={"ID":"7dc20166-a226-436e-8401-899b1eae0e42","Type":"ContainerStarted","Data":"02b1443405248b5c0354ab4ab71c52c7465db6b360d263241c4b65e8b1db7161"} Nov 22 09:22:46 crc kubenswrapper[4856]: I1122 09:22:46.016230 4856 generic.go:334] "Generic (PLEG): container finished" podID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerID="ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c" exitCode=0 Nov 22 09:22:46 crc kubenswrapper[4856]: I1122 09:22:46.016346 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mv9b" 
event={"ID":"f6e3ae3d-0770-4ee0-a03e-279941539afc","Type":"ContainerDied","Data":"ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c"} Nov 22 09:22:46 crc kubenswrapper[4856]: I1122 09:22:46.021232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2tjt" event={"ID":"7dc20166-a226-436e-8401-899b1eae0e42","Type":"ContainerStarted","Data":"a628da4b9b36db458a9b04b1ca749ce539eb45571266054af8fa9747c76f94a8"} Nov 22 09:22:47 crc kubenswrapper[4856]: I1122 09:22:47.032962 4856 generic.go:334] "Generic (PLEG): container finished" podID="7dc20166-a226-436e-8401-899b1eae0e42" containerID="a628da4b9b36db458a9b04b1ca749ce539eb45571266054af8fa9747c76f94a8" exitCode=0 Nov 22 09:22:47 crc kubenswrapper[4856]: I1122 09:22:47.033021 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2tjt" event={"ID":"7dc20166-a226-436e-8401-899b1eae0e42","Type":"ContainerDied","Data":"a628da4b9b36db458a9b04b1ca749ce539eb45571266054af8fa9747c76f94a8"} Nov 22 09:22:48 crc kubenswrapper[4856]: I1122 09:22:48.042973 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mv9b" event={"ID":"f6e3ae3d-0770-4ee0-a03e-279941539afc","Type":"ContainerStarted","Data":"7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35"} Nov 22 09:22:48 crc kubenswrapper[4856]: I1122 09:22:48.065233 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8mv9b" podStartSLOduration=3.487914096 podStartE2EDuration="10.065217618s" podCreationTimestamp="2025-11-22 09:22:38 +0000 UTC" firstStartedPulling="2025-11-22 09:22:40.962310661 +0000 UTC m=+8403.375703919" lastFinishedPulling="2025-11-22 09:22:47.539614183 +0000 UTC m=+8409.953007441" observedRunningTime="2025-11-22 09:22:48.060353186 +0000 UTC m=+8410.473746454" watchObservedRunningTime="2025-11-22 09:22:48.065217618 +0000 UTC m=+8410.478610876" Nov 22 09:22:49 crc kubenswrapper[4856]: I1122 09:22:49.074453 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2tjt" event={"ID":"7dc20166-a226-436e-8401-899b1eae0e42","Type":"ContainerStarted","Data":"3c1bc60269f5bd1cfa7e9a416b53f9e4a14b9c3ae19c6ef7e4858b0a66805d2d"} Nov 22 09:22:49 crc kubenswrapper[4856]: I1122 09:22:49.104265 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2tjt" podStartSLOduration=2.554958401 podStartE2EDuration="8.104214568s" podCreationTimestamp="2025-11-22 09:22:41 +0000 UTC" firstStartedPulling="2025-11-22 09:22:42.988653566 +0000 UTC m=+8405.402046824" lastFinishedPulling="2025-11-22 09:22:48.537909733 +0000 UTC m=+8410.951302991" observedRunningTime="2025-11-22 09:22:49.096663324 +0000 UTC m=+8411.510056582" watchObservedRunningTime="2025-11-22 09:22:49.104214568 +0000 UTC m=+8411.517607826" Nov 22 09:22:49 crc kubenswrapper[4856]: I1122 09:22:49.351174 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:49 crc kubenswrapper[4856]: I1122 09:22:49.351472 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:50 crc kubenswrapper[4856]: I1122 09:22:50.397493 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-8mv9b" 
podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="registry-server" probeResult="failure" output=< Nov 22 09:22:50 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:22:50 crc kubenswrapper[4856]: > Nov 22 09:22:51 crc kubenswrapper[4856]: I1122 09:22:51.969013 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:51 crc kubenswrapper[4856]: I1122 09:22:51.969376 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:52 crc kubenswrapper[4856]: I1122 09:22:52.014274 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:22:54 crc kubenswrapper[4856]: I1122 09:22:54.709448 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:22:54 crc kubenswrapper[4856]: E1122 09:22:54.710027 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:22:59 crc kubenswrapper[4856]: I1122 09:22:59.404918 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:59 crc kubenswrapper[4856]: I1122 09:22:59.462189 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:22:59 crc kubenswrapper[4856]: I1122 09:22:59.644655 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8mv9b"] Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.188013 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8mv9b" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="registry-server" containerID="cri-o://7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35" gracePeriod=2 Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.744613 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.895521 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-catalog-content\") pod \"f6e3ae3d-0770-4ee0-a03e-279941539afc\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.895619 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-utilities\") pod \"f6e3ae3d-0770-4ee0-a03e-279941539afc\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.895816 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqvrv\" (UniqueName: \"kubernetes.io/projected/f6e3ae3d-0770-4ee0-a03e-279941539afc-kube-api-access-xqvrv\") pod \"f6e3ae3d-0770-4ee0-a03e-279941539afc\" (UID: \"f6e3ae3d-0770-4ee0-a03e-279941539afc\") " Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.896718 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-utilities" (OuterVolumeSpecName: "utilities") pod "f6e3ae3d-0770-4ee0-a03e-279941539afc" (UID: "f6e3ae3d-0770-4ee0-a03e-279941539afc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.901405 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6e3ae3d-0770-4ee0-a03e-279941539afc-kube-api-access-xqvrv" (OuterVolumeSpecName: "kube-api-access-xqvrv") pod "f6e3ae3d-0770-4ee0-a03e-279941539afc" (UID: "f6e3ae3d-0770-4ee0-a03e-279941539afc"). InnerVolumeSpecName "kube-api-access-xqvrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.939063 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6e3ae3d-0770-4ee0-a03e-279941539afc" (UID: "f6e3ae3d-0770-4ee0-a03e-279941539afc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.998846 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.998886 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6e3ae3d-0770-4ee0-a03e-279941539afc-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:01 crc kubenswrapper[4856]: I1122 09:23:01.998900 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvrv\" (UniqueName: \"kubernetes.io/projected/f6e3ae3d-0770-4ee0-a03e-279941539afc-kube-api-access-xqvrv\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.017792 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.197543 4856 generic.go:334] "Generic (PLEG): container finished" podID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerID="7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35" exitCode=0 Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.197590 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mv9b" event={"ID":"f6e3ae3d-0770-4ee0-a03e-279941539afc","Type":"ContainerDied","Data":"7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35"} Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.197622 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8mv9b" event={"ID":"f6e3ae3d-0770-4ee0-a03e-279941539afc","Type":"ContainerDied","Data":"1e567beddfbb0c33fa4baa3f7ea6e9fef3c10f0b8d9c38a1d3cf9ce90ddb983e"} Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.197644 4856 scope.go:117] "RemoveContainer" containerID="7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.197660 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8mv9b" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.221642 4856 scope.go:117] "RemoveContainer" containerID="ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.230473 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8mv9b"] Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.240795 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8mv9b"] Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.247161 4856 scope.go:117] "RemoveContainer" containerID="0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.291435 4856 scope.go:117] "RemoveContainer" containerID="7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35" Nov 22 09:23:02 crc kubenswrapper[4856]: E1122 09:23:02.292019 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35\": container with ID starting with 7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35 not found: ID does not exist" containerID="7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.292058 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35"} err="failed to get container status \"7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35\": rpc error: code = NotFound desc = could not find container \"7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35\": container with ID starting with 7e08fe1de0de7a4f2767f179da2dc5a6295b67aa6bc3fc699d64730cf570bd35 not found: ID does not exist" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.292084 4856 scope.go:117] "RemoveContainer" containerID="ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c" Nov 22 09:23:02 crc kubenswrapper[4856]: E1122 09:23:02.292470 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c\": container with ID starting with ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c not found: ID does not exist" containerID="ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.292536 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c"} err="failed to get container status \"ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c\": rpc error: code = NotFound desc = could not find container \"ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c\": container with ID starting with ba5d7dffefc0cabef303f58ea0359079548b4eb4764a048ec8fd8b6339049a0c not found: ID does not exist" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.292562 4856 scope.go:117] "RemoveContainer" containerID="0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb" Nov 22 09:23:02 crc kubenswrapper[4856]: E1122 09:23:02.292929 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb\": container with ID starting with 0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb not found: ID does not exist" containerID="0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.292959 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb"} err="failed to get container status \"0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb\": rpc error: code = NotFound desc = could not find container \"0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb\": container with ID starting with 0c8a0d32992e66dfdd80f43fa624175cceaadecd83f37822151f82c210c5fabb not found: ID does not exist" Nov 22 09:23:02 crc kubenswrapper[4856]: I1122 09:23:02.721977 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" path="/var/lib/kubelet/pods/f6e3ae3d-0770-4ee0-a03e-279941539afc/volumes" Nov 22 09:23:03 crc kubenswrapper[4856]: I1122 09:23:03.842356 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2tjt"] Nov 22 09:23:03 crc kubenswrapper[4856]: I1122 09:23:03.843775 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2tjt" podUID="7dc20166-a226-436e-8401-899b1eae0e42" containerName="registry-server" containerID="cri-o://3c1bc60269f5bd1cfa7e9a416b53f9e4a14b9c3ae19c6ef7e4858b0a66805d2d" gracePeriod=2 Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.225360 4856 generic.go:334] "Generic (PLEG): container finished" podID="7dc20166-a226-436e-8401-899b1eae0e42" containerID="3c1bc60269f5bd1cfa7e9a416b53f9e4a14b9c3ae19c6ef7e4858b0a66805d2d" exitCode=0 Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.225407 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2tjt" event={"ID":"7dc20166-a226-436e-8401-899b1eae0e42","Type":"ContainerDied","Data":"3c1bc60269f5bd1cfa7e9a416b53f9e4a14b9c3ae19c6ef7e4858b0a66805d2d"} Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.373339 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.450750 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn88c\" (UniqueName: \"kubernetes.io/projected/7dc20166-a226-436e-8401-899b1eae0e42-kube-api-access-dn88c\") pod \"7dc20166-a226-436e-8401-899b1eae0e42\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.450984 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-catalog-content\") pod \"7dc20166-a226-436e-8401-899b1eae0e42\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.451019 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-utilities\") pod \"7dc20166-a226-436e-8401-899b1eae0e42\" (UID: \"7dc20166-a226-436e-8401-899b1eae0e42\") " Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.452118 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-utilities" (OuterVolumeSpecName: "utilities") pod "7dc20166-a226-436e-8401-899b1eae0e42" (UID: "7dc20166-a226-436e-8401-899b1eae0e42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.458146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc20166-a226-436e-8401-899b1eae0e42-kube-api-access-dn88c" (OuterVolumeSpecName: "kube-api-access-dn88c") pod "7dc20166-a226-436e-8401-899b1eae0e42" (UID: "7dc20166-a226-436e-8401-899b1eae0e42"). InnerVolumeSpecName "kube-api-access-dn88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.467949 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dc20166-a226-436e-8401-899b1eae0e42" (UID: "7dc20166-a226-436e-8401-899b1eae0e42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.553141 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.553175 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dc20166-a226-436e-8401-899b1eae0e42-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:04 crc kubenswrapper[4856]: I1122 09:23:04.553186 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn88c\" (UniqueName: \"kubernetes.io/projected/7dc20166-a226-436e-8401-899b1eae0e42-kube-api-access-dn88c\") on node \"crc\" DevicePath \"\"" Nov 22 09:23:05 crc kubenswrapper[4856]: I1122 09:23:05.236430 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2tjt" event={"ID":"7dc20166-a226-436e-8401-899b1eae0e42","Type":"ContainerDied","Data":"02b1443405248b5c0354ab4ab71c52c7465db6b360d263241c4b65e8b1db7161"} Nov 22 09:23:05 crc kubenswrapper[4856]: I1122 09:23:05.236487 4856 scope.go:117] "RemoveContainer" containerID="3c1bc60269f5bd1cfa7e9a416b53f9e4a14b9c3ae19c6ef7e4858b0a66805d2d" Nov 22 09:23:05 crc kubenswrapper[4856]: I1122 09:23:05.237180 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2tjt" Nov 22 09:23:05 crc kubenswrapper[4856]: I1122 09:23:05.260389 4856 scope.go:117] "RemoveContainer" containerID="a628da4b9b36db458a9b04b1ca749ce539eb45571266054af8fa9747c76f94a8" Nov 22 09:23:05 crc kubenswrapper[4856]: I1122 09:23:05.262477 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2tjt"] Nov 22 09:23:05 crc kubenswrapper[4856]: I1122 09:23:05.271890 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2tjt"] Nov 22 09:23:05 crc kubenswrapper[4856]: I1122 09:23:05.287972 4856 scope.go:117] "RemoveContainer" containerID="1f548107a2fda58316336588805417fc3abec0f7f8a52612703fafb1f4c1898e" Nov 22 09:23:06 crc kubenswrapper[4856]: I1122 09:23:06.709855 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:23:06 crc kubenswrapper[4856]: E1122 09:23:06.710680 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:23:06 crc kubenswrapper[4856]: I1122 09:23:06.723493 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dc20166-a226-436e-8401-899b1eae0e42" path="/var/lib/kubelet/pods/7dc20166-a226-436e-8401-899b1eae0e42/volumes" Nov 22 09:23:17 crc kubenswrapper[4856]: I1122 09:23:17.710330 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:23:17 crc kubenswrapper[4856]: E1122 09:23:17.711268 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:23:31 crc kubenswrapper[4856]: I1122 09:23:31.712932 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:23:32 crc kubenswrapper[4856]: I1122 09:23:32.502326 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"7a460e306aceef36bf4d6d67ac2c47b2dc59859e0fadc443f2daba9cf3e50d6a"} Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.342493 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-shksv"] Nov 22 09:25:43 crc kubenswrapper[4856]: E1122 09:25:43.343556 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="registry-server" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.343575 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="registry-server" Nov 22 09:25:43 crc kubenswrapper[4856]: E1122 09:25:43.343591 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc20166-a226-436e-8401-899b1eae0e42" containerName="registry-server" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.343598 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc20166-a226-436e-8401-899b1eae0e42" containerName="registry-server" Nov 22 09:25:43 crc kubenswrapper[4856]: E1122 09:25:43.343618 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="extract-content" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.343626 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="extract-content" Nov 22 09:25:43 crc kubenswrapper[4856]: E1122 09:25:43.343666 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="extract-utilities" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.343674 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="extract-utilities" Nov 22 09:25:43 crc kubenswrapper[4856]: E1122 09:25:43.343690 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc20166-a226-436e-8401-899b1eae0e42" containerName="extract-utilities" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.343696 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc20166-a226-436e-8401-899b1eae0e42" containerName="extract-utilities" Nov 22 09:25:43 crc kubenswrapper[4856]: E1122 09:25:43.343711 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc20166-a226-436e-8401-899b1eae0e42" containerName="extract-content" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.343718 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc20166-a226-436e-8401-899b1eae0e42" containerName="extract-content" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.343978 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dc20166-a226-436e-8401-899b1eae0e42" 
containerName="registry-server" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.344006 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6e3ae3d-0770-4ee0-a03e-279941539afc" containerName="registry-server" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.345840 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.355593 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shksv"] Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.420404 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-utilities\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.420495 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8glz\" (UniqueName: \"kubernetes.io/projected/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-kube-api-access-m8glz\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.420562 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-catalog-content\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.523154 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-utilities\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.523278 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8glz\" (UniqueName: \"kubernetes.io/projected/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-kube-api-access-m8glz\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.523343 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-catalog-content\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.523908 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-utilities\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.523908 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-catalog-content\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.541453 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8glz\" (UniqueName: \"kubernetes.io/projected/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-kube-api-access-m8glz\") pod \"redhat-operators-shksv\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:43 crc kubenswrapper[4856]: I1122 09:25:43.670056 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:44 crc kubenswrapper[4856]: I1122 09:25:44.115384 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-shksv"] Nov 22 09:25:44 crc kubenswrapper[4856]: W1122 09:25:44.124938 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod808f76ff_83f1_4528_a0ed_d7a7a7a7075c.slice/crio-6cc113c2bae65fa8c861c47979a8f18a92e58cbee61addd9abbbf70d790340e7 WatchSource:0}: Error finding container 6cc113c2bae65fa8c861c47979a8f18a92e58cbee61addd9abbbf70d790340e7: Status 404 returned error can't find the container with id 6cc113c2bae65fa8c861c47979a8f18a92e58cbee61addd9abbbf70d790340e7 Nov 22 09:25:44 crc kubenswrapper[4856]: I1122 09:25:44.748049 4856 generic.go:334] "Generic (PLEG): container finished" podID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerID="261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69" exitCode=0 Nov 22 09:25:44 crc kubenswrapper[4856]: I1122 09:25:44.748098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shksv" event={"ID":"808f76ff-83f1-4528-a0ed-d7a7a7a7075c","Type":"ContainerDied","Data":"261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69"} Nov 22 09:25:44 crc kubenswrapper[4856]: I1122 09:25:44.748337 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shksv" event={"ID":"808f76ff-83f1-4528-a0ed-d7a7a7a7075c","Type":"ContainerStarted","Data":"6cc113c2bae65fa8c861c47979a8f18a92e58cbee61addd9abbbf70d790340e7"} Nov 22 09:25:45 crc kubenswrapper[4856]: I1122 09:25:45.762760 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shksv" event={"ID":"808f76ff-83f1-4528-a0ed-d7a7a7a7075c","Type":"ContainerStarted","Data":"f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835"} Nov 22 09:25:49 crc kubenswrapper[4856]: I1122 09:25:49.804338 4856 generic.go:334] "Generic (PLEG): container finished" podID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerID="f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835" exitCode=0 Nov 22 09:25:49 crc kubenswrapper[4856]: I1122 09:25:49.804429 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shksv" event={"ID":"808f76ff-83f1-4528-a0ed-d7a7a7a7075c","Type":"ContainerDied","Data":"f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835"} Nov 22 09:25:50 crc kubenswrapper[4856]: I1122 09:25:50.816325 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shksv" 
event={"ID":"808f76ff-83f1-4528-a0ed-d7a7a7a7075c","Type":"ContainerStarted","Data":"2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb"} Nov 22 09:25:50 crc kubenswrapper[4856]: I1122 09:25:50.838238 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-shksv" podStartSLOduration=2.099528671 podStartE2EDuration="7.838220066s" podCreationTimestamp="2025-11-22 09:25:43 +0000 UTC" firstStartedPulling="2025-11-22 09:25:44.749910199 +0000 UTC m=+8587.163303457" lastFinishedPulling="2025-11-22 09:25:50.488601594 +0000 UTC m=+8592.901994852" observedRunningTime="2025-11-22 09:25:50.832366188 +0000 UTC m=+8593.245759446" watchObservedRunningTime="2025-11-22 09:25:50.838220066 +0000 UTC m=+8593.251613324" Nov 22 09:25:53 crc kubenswrapper[4856]: I1122 09:25:53.671329 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:53 crc kubenswrapper[4856]: I1122 09:25:53.671737 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:25:54 crc kubenswrapper[4856]: I1122 09:25:54.716297 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-shksv" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="registry-server" probeResult="failure" output=< Nov 22 09:25:54 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:25:54 crc kubenswrapper[4856]: > Nov 22 09:25:59 crc kubenswrapper[4856]: I1122 09:25:59.754253 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:25:59 crc kubenswrapper[4856]: I1122 09:25:59.755356 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:26:03 crc kubenswrapper[4856]: I1122 09:26:03.736816 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:26:03 crc kubenswrapper[4856]: I1122 09:26:03.795306 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:26:03 crc kubenswrapper[4856]: I1122 09:26:03.986010 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shksv"] Nov 22 09:26:04 crc kubenswrapper[4856]: I1122 09:26:04.951938 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-shksv" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="registry-server" containerID="cri-o://2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb" gracePeriod=2 Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.446872 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.602402 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-utilities\") pod \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.602602 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8glz\" (UniqueName: \"kubernetes.io/projected/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-kube-api-access-m8glz\") pod \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.602679 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-catalog-content\") pod \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\" (UID: \"808f76ff-83f1-4528-a0ed-d7a7a7a7075c\") " Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.603880 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-utilities" (OuterVolumeSpecName: "utilities") pod "808f76ff-83f1-4528-a0ed-d7a7a7a7075c" (UID: "808f76ff-83f1-4528-a0ed-d7a7a7a7075c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.610171 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-kube-api-access-m8glz" (OuterVolumeSpecName: "kube-api-access-m8glz") pod "808f76ff-83f1-4528-a0ed-d7a7a7a7075c" (UID: "808f76ff-83f1-4528-a0ed-d7a7a7a7075c"). InnerVolumeSpecName "kube-api-access-m8glz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.697215 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "808f76ff-83f1-4528-a0ed-d7a7a7a7075c" (UID: "808f76ff-83f1-4528-a0ed-d7a7a7a7075c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.705495 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8glz\" (UniqueName: \"kubernetes.io/projected/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-kube-api-access-m8glz\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.705593 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.705608 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/808f76ff-83f1-4528-a0ed-d7a7a7a7075c-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.963279 4856 generic.go:334] "Generic (PLEG): container finished" podID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerID="2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb" exitCode=0 Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.963324 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shksv" event={"ID":"808f76ff-83f1-4528-a0ed-d7a7a7a7075c","Type":"ContainerDied","Data":"2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb"} Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.963355 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-shksv" event={"ID":"808f76ff-83f1-4528-a0ed-d7a7a7a7075c","Type":"ContainerDied","Data":"6cc113c2bae65fa8c861c47979a8f18a92e58cbee61addd9abbbf70d790340e7"} Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.963345 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-shksv" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.963373 4856 scope.go:117] "RemoveContainer" containerID="2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb" Nov 22 09:26:05 crc kubenswrapper[4856]: I1122 09:26:05.990070 4856 scope.go:117] "RemoveContainer" containerID="f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.003793 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-shksv"] Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.012388 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-shksv"] Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.013088 4856 scope.go:117] "RemoveContainer" containerID="261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.057421 4856 scope.go:117] "RemoveContainer" containerID="2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb" Nov 22 09:26:06 crc kubenswrapper[4856]: E1122 09:26:06.058057 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb\": container with ID starting with 2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb not found: ID does not exist" containerID="2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.058103 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb"} err="failed to get container status \"2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb\": rpc error: code = NotFound desc = could not find container \"2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb\": container with ID starting with 2c7eaedb27fb2bb2c2f57c2ebe7f6efc2c1ab439b561c94444b06462d2e72bbb not found: ID does not exist" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.058131 4856 scope.go:117] "RemoveContainer" containerID="f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835" Nov 22 09:26:06 crc kubenswrapper[4856]: E1122 09:26:06.058440 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835\": container with ID starting with f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835 not found: ID does not exist" containerID="f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.058472 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835"} err="failed to get container status \"f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835\": rpc error: code = NotFound desc = could not find container \"f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835\": container with ID starting with f978933b54638195a31d9062951c515a1ad15230ad98a3340bb85e664593f835 not found: ID does not exist" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.058488 4856 scope.go:117] "RemoveContainer" 
containerID="261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69" Nov 22 09:26:06 crc kubenswrapper[4856]: E1122 09:26:06.059114 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69\": container with ID starting with 261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69 not found: ID does not exist" containerID="261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.059158 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69"} err="failed to get container status \"261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69\": rpc error: code = NotFound desc = could not find container \"261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69\": container with ID starting with 261d4d729fff7501059dd39e256f2bc688a381f204ed66955c6974b7221f7d69 not found: ID does not exist" Nov 22 09:26:06 crc kubenswrapper[4856]: I1122 09:26:06.730372 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" path="/var/lib/kubelet/pods/808f76ff-83f1-4528-a0ed-d7a7a7a7075c/volumes" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.193399 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d8gn8"] Nov 22 09:26:07 crc kubenswrapper[4856]: E1122 09:26:07.194199 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="registry-server" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.194216 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="registry-server" Nov 22 09:26:07 crc kubenswrapper[4856]: E1122 09:26:07.194262 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="extract-utilities" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.194276 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="extract-utilities" Nov 22 09:26:07 crc kubenswrapper[4856]: E1122 09:26:07.194306 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="extract-content" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.194321 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="extract-content" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.194667 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="808f76ff-83f1-4528-a0ed-d7a7a7a7075c" containerName="registry-server" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.196982 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.209826 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8gn8"] Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.346219 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-utilities\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.346738 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-catalog-content\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.346948 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlq65\" (UniqueName: \"kubernetes.io/projected/85c8e294-79ba-4970-9655-55a5d960a928-kube-api-access-xlq65\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.449755 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlq65\" (UniqueName: \"kubernetes.io/projected/85c8e294-79ba-4970-9655-55a5d960a928-kube-api-access-xlq65\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.449938 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-utilities\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.450027 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-catalog-content\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.450682 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-utilities\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.450763 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-catalog-content\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.476233 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xlq65\" (UniqueName: \"kubernetes.io/projected/85c8e294-79ba-4970-9655-55a5d960a928-kube-api-access-xlq65\") pod \"certified-operators-d8gn8\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:07 crc kubenswrapper[4856]: I1122 09:26:07.522394 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:08 crc kubenswrapper[4856]: I1122 09:26:08.081583 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d8gn8"] Nov 22 09:26:08 crc kubenswrapper[4856]: I1122 09:26:08.997278 4856 generic.go:334] "Generic (PLEG): container finished" podID="85c8e294-79ba-4970-9655-55a5d960a928" containerID="66eb0fbcdc2e383fe14c29809321809b5a9b9fc68be840b2a75e30728a4bd741" exitCode=0 Nov 22 09:26:08 crc kubenswrapper[4856]: I1122 09:26:08.997452 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8gn8" event={"ID":"85c8e294-79ba-4970-9655-55a5d960a928","Type":"ContainerDied","Data":"66eb0fbcdc2e383fe14c29809321809b5a9b9fc68be840b2a75e30728a4bd741"} Nov 22 09:26:08 crc kubenswrapper[4856]: I1122 09:26:08.997747 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8gn8" event={"ID":"85c8e294-79ba-4970-9655-55a5d960a928","Type":"ContainerStarted","Data":"f519daf9b22988597f98fc9fd6bcda1dbd3f1129d2ad0eafa0f14b701f120a23"} Nov 22 09:26:10 crc kubenswrapper[4856]: I1122 09:26:10.012232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8gn8" event={"ID":"85c8e294-79ba-4970-9655-55a5d960a928","Type":"ContainerStarted","Data":"d02036c225186ad35a757f567463d03292b86a853027acf52f05166ae03409fc"} Nov 22 09:26:11 crc kubenswrapper[4856]: I1122 09:26:11.027322 4856 generic.go:334] "Generic (PLEG): container finished" podID="85c8e294-79ba-4970-9655-55a5d960a928" containerID="d02036c225186ad35a757f567463d03292b86a853027acf52f05166ae03409fc" exitCode=0 Nov 22 09:26:11 crc kubenswrapper[4856]: I1122 09:26:11.027408 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8gn8" event={"ID":"85c8e294-79ba-4970-9655-55a5d960a928","Type":"ContainerDied","Data":"d02036c225186ad35a757f567463d03292b86a853027acf52f05166ae03409fc"} Nov 22 09:26:12 crc kubenswrapper[4856]: I1122 09:26:12.041762 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8gn8" event={"ID":"85c8e294-79ba-4970-9655-55a5d960a928","Type":"ContainerStarted","Data":"735bfce67f06fe82d085223e2ff02b9e5efc947fc7560db2835fd43887919c60"} Nov 22 09:26:12 crc kubenswrapper[4856]: I1122 09:26:12.070008 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d8gn8" podStartSLOduration=2.603626611 podStartE2EDuration="5.069987041s" podCreationTimestamp="2025-11-22 09:26:07 +0000 UTC" firstStartedPulling="2025-11-22 09:26:09.00075001 +0000 UTC m=+8611.414143338" lastFinishedPulling="2025-11-22 09:26:11.4671105 +0000 UTC m=+8613.880503768" observedRunningTime="2025-11-22 09:26:12.06254505 +0000 UTC m=+8614.475938328" watchObservedRunningTime="2025-11-22 09:26:12.069987041 +0000 UTC m=+8614.483380299" Nov 22 09:26:17 crc kubenswrapper[4856]: I1122 09:26:17.523138 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:17 crc kubenswrapper[4856]: I1122 09:26:17.523713 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:17 crc kubenswrapper[4856]: I1122 09:26:17.593138 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:18 crc kubenswrapper[4856]: I1122 09:26:18.169024 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:18 crc kubenswrapper[4856]: I1122 09:26:18.224638 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8gn8"] Nov 22 09:26:20 crc kubenswrapper[4856]: I1122 09:26:20.126766 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d8gn8" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="registry-server" containerID="cri-o://735bfce67f06fe82d085223e2ff02b9e5efc947fc7560db2835fd43887919c60" gracePeriod=2 Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.143905 4856 generic.go:334] "Generic (PLEG): container finished" podID="85c8e294-79ba-4970-9655-55a5d960a928" containerID="735bfce67f06fe82d085223e2ff02b9e5efc947fc7560db2835fd43887919c60" exitCode=0 Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.144019 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8gn8" event={"ID":"85c8e294-79ba-4970-9655-55a5d960a928","Type":"ContainerDied","Data":"735bfce67f06fe82d085223e2ff02b9e5efc947fc7560db2835fd43887919c60"} Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.287183 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.458586 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-catalog-content\") pod \"85c8e294-79ba-4970-9655-55a5d960a928\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.458729 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-utilities\") pod \"85c8e294-79ba-4970-9655-55a5d960a928\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.458814 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlq65\" (UniqueName: \"kubernetes.io/projected/85c8e294-79ba-4970-9655-55a5d960a928-kube-api-access-xlq65\") pod \"85c8e294-79ba-4970-9655-55a5d960a928\" (UID: \"85c8e294-79ba-4970-9655-55a5d960a928\") " Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.460020 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-utilities" (OuterVolumeSpecName: "utilities") pod "85c8e294-79ba-4970-9655-55a5d960a928" (UID: "85c8e294-79ba-4970-9655-55a5d960a928"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.469799 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85c8e294-79ba-4970-9655-55a5d960a928-kube-api-access-xlq65" (OuterVolumeSpecName: "kube-api-access-xlq65") pod "85c8e294-79ba-4970-9655-55a5d960a928" (UID: "85c8e294-79ba-4970-9655-55a5d960a928"). InnerVolumeSpecName "kube-api-access-xlq65". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.511271 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85c8e294-79ba-4970-9655-55a5d960a928" (UID: "85c8e294-79ba-4970-9655-55a5d960a928"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.561640 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlq65\" (UniqueName: \"kubernetes.io/projected/85c8e294-79ba-4970-9655-55a5d960a928-kube-api-access-xlq65\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.561941 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:21 crc kubenswrapper[4856]: I1122 09:26:21.562157 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85c8e294-79ba-4970-9655-55a5d960a928-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.159106 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d8gn8" event={"ID":"85c8e294-79ba-4970-9655-55a5d960a928","Type":"ContainerDied","Data":"f519daf9b22988597f98fc9fd6bcda1dbd3f1129d2ad0eafa0f14b701f120a23"} Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.159264 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d8gn8" Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.159540 4856 scope.go:117] "RemoveContainer" containerID="735bfce67f06fe82d085223e2ff02b9e5efc947fc7560db2835fd43887919c60" Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.202661 4856 scope.go:117] "RemoveContainer" containerID="d02036c225186ad35a757f567463d03292b86a853027acf52f05166ae03409fc" Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.212024 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d8gn8"] Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.228883 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d8gn8"] Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.237191 4856 scope.go:117] "RemoveContainer" containerID="66eb0fbcdc2e383fe14c29809321809b5a9b9fc68be840b2a75e30728a4bd741" Nov 22 09:26:22 crc kubenswrapper[4856]: I1122 09:26:22.721107 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85c8e294-79ba-4970-9655-55a5d960a928" path="/var/lib/kubelet/pods/85c8e294-79ba-4970-9655-55a5d960a928/volumes" Nov 22 09:26:29 crc kubenswrapper[4856]: I1122 09:26:29.754302 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:26:29 crc kubenswrapper[4856]: I1122 09:26:29.754975 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:26:43 crc kubenswrapper[4856]: I1122 09:26:43.374059 4856 generic.go:334] "Generic (PLEG): container finished" podID="0845a70f-bedf-4495-8e38-207547e02a31" containerID="82a2caa8a40ce0803164fa622a39401b7d15654559aecfbfba50e5fdab63b740" exitCode=0 Nov 22 09:26:43 crc kubenswrapper[4856]: I1122 09:26:43.374151 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" event={"ID":"0845a70f-bedf-4495-8e38-207547e02a31","Type":"ContainerDied","Data":"82a2caa8a40ce0803164fa622a39401b7d15654559aecfbfba50e5fdab63b740"} Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.070116 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.190931 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxcrv\" (UniqueName: \"kubernetes.io/projected/0845a70f-bedf-4495-8e38-207547e02a31-kube-api-access-zxcrv\") pod \"0845a70f-bedf-4495-8e38-207547e02a31\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.191000 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ssh-key\") pod \"0845a70f-bedf-4495-8e38-207547e02a31\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.191080 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-1\") pod \"0845a70f-bedf-4495-8e38-207547e02a31\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.191189 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-inventory\") pod \"0845a70f-bedf-4495-8e38-207547e02a31\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.191253 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-telemetry-combined-ca-bundle\") pod \"0845a70f-bedf-4495-8e38-207547e02a31\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.191388 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-2\") pod \"0845a70f-bedf-4495-8e38-207547e02a31\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.191581 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-0\") pod \"0845a70f-bedf-4495-8e38-207547e02a31\" (UID: \"0845a70f-bedf-4495-8e38-207547e02a31\") " Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.197330 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0845a70f-bedf-4495-8e38-207547e02a31" (UID: "0845a70f-bedf-4495-8e38-207547e02a31"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.197344 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0845a70f-bedf-4495-8e38-207547e02a31-kube-api-access-zxcrv" (OuterVolumeSpecName: "kube-api-access-zxcrv") pod "0845a70f-bedf-4495-8e38-207547e02a31" (UID: "0845a70f-bedf-4495-8e38-207547e02a31"). InnerVolumeSpecName "kube-api-access-zxcrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.223354 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "0845a70f-bedf-4495-8e38-207547e02a31" (UID: "0845a70f-bedf-4495-8e38-207547e02a31"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.225368 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-inventory" (OuterVolumeSpecName: "inventory") pod "0845a70f-bedf-4495-8e38-207547e02a31" (UID: "0845a70f-bedf-4495-8e38-207547e02a31"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.228396 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0845a70f-bedf-4495-8e38-207547e02a31" (UID: "0845a70f-bedf-4495-8e38-207547e02a31"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.229425 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "0845a70f-bedf-4495-8e38-207547e02a31" (UID: "0845a70f-bedf-4495-8e38-207547e02a31"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.238150 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "0845a70f-bedf-4495-8e38-207547e02a31" (UID: "0845a70f-bedf-4495-8e38-207547e02a31"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.294415 4856 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.294450 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.294461 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.294472 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxcrv\" (UniqueName: \"kubernetes.io/projected/0845a70f-bedf-4495-8e38-207547e02a31-kube-api-access-zxcrv\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.294481 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.294490 4856 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.294499 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0845a70f-bedf-4495-8e38-207547e02a31-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.395130 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" event={"ID":"0845a70f-bedf-4495-8e38-207547e02a31","Type":"ContainerDied","Data":"72827997439b3c3587c52206cd17e62f865b6ec64f43870d40f7fb0e593708ff"} Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.395547 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72827997439b3c3587c52206cd17e62f865b6ec64f43870d40f7fb0e593708ff" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.395172 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-openstack-openstack-cell1-rhs7q" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.489484 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-xgb8k"] Nov 22 09:26:45 crc kubenswrapper[4856]: E1122 09:26:45.490223 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="extract-utilities" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.490286 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="extract-utilities" Nov 22 09:26:45 crc kubenswrapper[4856]: E1122 09:26:45.490313 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="extract-content" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.490322 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="extract-content" Nov 22 09:26:45 crc kubenswrapper[4856]: E1122 09:26:45.490369 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0845a70f-bedf-4495-8e38-207547e02a31" containerName="telemetry-openstack-openstack-cell1" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.490380 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0845a70f-bedf-4495-8e38-207547e02a31" containerName="telemetry-openstack-openstack-cell1" Nov 22 09:26:45 crc kubenswrapper[4856]: E1122 09:26:45.490393 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="registry-server" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.490400 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="registry-server" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.490881 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="85c8e294-79ba-4970-9655-55a5d960a928" containerName="registry-server" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.490910 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0845a70f-bedf-4495-8e38-207547e02a31" containerName="telemetry-openstack-openstack-cell1" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.491952 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.495009 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.498000 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.498586 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-sriov-agent-neutron-config" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.498714 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.498895 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.500763 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-xgb8k"] Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.599667 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.599724 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.599786 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.599850 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtvw\" (UniqueName: \"kubernetes.io/projected/3cfedb7a-57e2-4533-95c9-4c691087caed-kube-api-access-rmtvw\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.599944 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.701414 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.701605 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtvw\" (UniqueName: \"kubernetes.io/projected/3cfedb7a-57e2-4533-95c9-4c691087caed-kube-api-access-rmtvw\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.701680 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.701761 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.701810 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.705606 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-inventory\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.705785 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-ssh-key\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.706046 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-agent-neutron-config-0\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.707242 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-combined-ca-bundle\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.719110 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtvw\" (UniqueName: \"kubernetes.io/projected/3cfedb7a-57e2-4533-95c9-4c691087caed-kube-api-access-rmtvw\") pod \"neutron-sriov-openstack-openstack-cell1-xgb8k\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:45 crc kubenswrapper[4856]: I1122 09:26:45.809693 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:26:46 crc kubenswrapper[4856]: I1122 09:26:46.302921 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-sriov-openstack-openstack-cell1-xgb8k"] Nov 22 09:26:47 crc kubenswrapper[4856]: I1122 09:26:47.417661 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" event={"ID":"3cfedb7a-57e2-4533-95c9-4c691087caed","Type":"ContainerStarted","Data":"0819bdb67e879f52eac5554704b248d49b4750096ad89c7fe960881f3fab8871"} Nov 22 09:26:48 crc kubenswrapper[4856]: I1122 09:26:48.429052 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" event={"ID":"3cfedb7a-57e2-4533-95c9-4c691087caed","Type":"ContainerStarted","Data":"918346bf938b09a9deee2bdd01df80b2739405dc29d78bcdd3dfbf0bbf21511e"} Nov 22 09:26:48 crc kubenswrapper[4856]: I1122 09:26:48.454806 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" podStartSLOduration=2.941156571 podStartE2EDuration="3.454788072s" podCreationTimestamp="2025-11-22 09:26:45 +0000 UTC" firstStartedPulling="2025-11-22 09:26:46.676994599 +0000 UTC m=+8649.090387857" lastFinishedPulling="2025-11-22 09:26:47.1906261 +0000 UTC m=+8649.604019358" observedRunningTime="2025-11-22 09:26:48.446378595 +0000 UTC m=+8650.859771863" watchObservedRunningTime="2025-11-22 09:26:48.454788072 +0000 UTC m=+8650.868181330" Nov 22 09:26:59 crc kubenswrapper[4856]: I1122 09:26:59.754886 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:26:59 crc kubenswrapper[4856]: I1122 09:26:59.755384 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:26:59 crc kubenswrapper[4856]: I1122 09:26:59.755442 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:26:59 crc kubenswrapper[4856]: I1122 09:26:59.756656 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"7a460e306aceef36bf4d6d67ac2c47b2dc59859e0fadc443f2daba9cf3e50d6a"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:26:59 crc kubenswrapper[4856]: I1122 09:26:59.756796 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://7a460e306aceef36bf4d6d67ac2c47b2dc59859e0fadc443f2daba9cf3e50d6a" gracePeriod=600 Nov 22 09:27:00 crc kubenswrapper[4856]: I1122 09:27:00.571905 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="7a460e306aceef36bf4d6d67ac2c47b2dc59859e0fadc443f2daba9cf3e50d6a" exitCode=0 Nov 22 09:27:00 crc kubenswrapper[4856]: I1122 09:27:00.571968 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"7a460e306aceef36bf4d6d67ac2c47b2dc59859e0fadc443f2daba9cf3e50d6a"} Nov 22 09:27:00 crc kubenswrapper[4856]: I1122 09:27:00.572716 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4"} Nov 22 09:27:00 crc kubenswrapper[4856]: I1122 09:27:00.572747 4856 scope.go:117] "RemoveContainer" containerID="2bcc065b0c48c04aa5b160423b3b0be99b88df077986747b08bead643cbac313" Nov 22 09:29:29 crc kubenswrapper[4856]: I1122 09:29:29.754485 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:29:29 crc kubenswrapper[4856]: I1122 09:29:29.755857 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:29:59 crc kubenswrapper[4856]: I1122 09:29:59.754423 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:29:59 crc kubenswrapper[4856]: I1122 09:29:59.755126 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.168885 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2"] Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.170161 4856 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.173416 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.175064 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.200271 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2"] Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.345767 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfndl\" (UniqueName: \"kubernetes.io/projected/32a693bd-1575-42ab-8da5-da5e4a56838c-kube-api-access-vfndl\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.345988 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a693bd-1575-42ab-8da5-da5e4a56838c-secret-volume\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.346135 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a693bd-1575-42ab-8da5-da5e4a56838c-config-volume\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.448976 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfndl\" (UniqueName: \"kubernetes.io/projected/32a693bd-1575-42ab-8da5-da5e4a56838c-kube-api-access-vfndl\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.449146 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a693bd-1575-42ab-8da5-da5e4a56838c-secret-volume\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.449232 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a693bd-1575-42ab-8da5-da5e4a56838c-config-volume\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.450949 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/32a693bd-1575-42ab-8da5-da5e4a56838c-config-volume\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.460646 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a693bd-1575-42ab-8da5-da5e4a56838c-secret-volume\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.473249 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfndl\" (UniqueName: \"kubernetes.io/projected/32a693bd-1575-42ab-8da5-da5e4a56838c-kube-api-access-vfndl\") pod \"collect-profiles-29396730-98gw2\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:00 crc kubenswrapper[4856]: I1122 09:30:00.511112 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:01 crc kubenswrapper[4856]: I1122 09:30:01.032138 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2"] Nov 22 09:30:01 crc kubenswrapper[4856]: I1122 09:30:01.475119 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" event={"ID":"32a693bd-1575-42ab-8da5-da5e4a56838c","Type":"ContainerStarted","Data":"3bcc50e9315995a108b27c7872956bceadcf43b8dd59bf8537e578ec2b937624"} Nov 22 09:30:01 crc kubenswrapper[4856]: I1122 09:30:01.476190 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" event={"ID":"32a693bd-1575-42ab-8da5-da5e4a56838c","Type":"ContainerStarted","Data":"ab1703e6c6dc6217a53201eb17c7de6af71364189df873d9c4cda6a5315ffd02"} Nov 22 09:30:02 crc kubenswrapper[4856]: I1122 09:30:02.489233 4856 generic.go:334] "Generic (PLEG): container finished" podID="32a693bd-1575-42ab-8da5-da5e4a56838c" containerID="3bcc50e9315995a108b27c7872956bceadcf43b8dd59bf8537e578ec2b937624" exitCode=0 Nov 22 09:30:02 crc kubenswrapper[4856]: I1122 09:30:02.489294 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" event={"ID":"32a693bd-1575-42ab-8da5-da5e4a56838c","Type":"ContainerDied","Data":"3bcc50e9315995a108b27c7872956bceadcf43b8dd59bf8537e578ec2b937624"} Nov 22 09:30:02 crc kubenswrapper[4856]: I1122 09:30:02.888121 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.006680 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a693bd-1575-42ab-8da5-da5e4a56838c-config-volume\") pod \"32a693bd-1575-42ab-8da5-da5e4a56838c\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.007031 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a693bd-1575-42ab-8da5-da5e4a56838c-secret-volume\") pod \"32a693bd-1575-42ab-8da5-da5e4a56838c\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.007118 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfndl\" (UniqueName: \"kubernetes.io/projected/32a693bd-1575-42ab-8da5-da5e4a56838c-kube-api-access-vfndl\") pod \"32a693bd-1575-42ab-8da5-da5e4a56838c\" (UID: \"32a693bd-1575-42ab-8da5-da5e4a56838c\") " Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.007381 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a693bd-1575-42ab-8da5-da5e4a56838c-config-volume" (OuterVolumeSpecName: "config-volume") pod "32a693bd-1575-42ab-8da5-da5e4a56838c" (UID: "32a693bd-1575-42ab-8da5-da5e4a56838c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.007608 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/32a693bd-1575-42ab-8da5-da5e4a56838c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.012721 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a693bd-1575-42ab-8da5-da5e4a56838c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "32a693bd-1575-42ab-8da5-da5e4a56838c" (UID: "32a693bd-1575-42ab-8da5-da5e4a56838c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.012959 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a693bd-1575-42ab-8da5-da5e4a56838c-kube-api-access-vfndl" (OuterVolumeSpecName: "kube-api-access-vfndl") pod "32a693bd-1575-42ab-8da5-da5e4a56838c" (UID: "32a693bd-1575-42ab-8da5-da5e4a56838c"). InnerVolumeSpecName "kube-api-access-vfndl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.109958 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfndl\" (UniqueName: \"kubernetes.io/projected/32a693bd-1575-42ab-8da5-da5e4a56838c-kube-api-access-vfndl\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.110009 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/32a693bd-1575-42ab-8da5-da5e4a56838c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.505630 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" event={"ID":"32a693bd-1575-42ab-8da5-da5e4a56838c","Type":"ContainerDied","Data":"ab1703e6c6dc6217a53201eb17c7de6af71364189df873d9c4cda6a5315ffd02"} Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.505728 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab1703e6c6dc6217a53201eb17c7de6af71364189df873d9c4cda6a5315ffd02" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.505867 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396730-98gw2" Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.973625 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm"] Nov 22 09:30:03 crc kubenswrapper[4856]: I1122 09:30:03.983197 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396685-lv6nm"] Nov 22 09:30:04 crc kubenswrapper[4856]: I1122 09:30:04.720622 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1050fae-88db-48d3-8b09-87c3fe96a967" path="/var/lib/kubelet/pods/a1050fae-88db-48d3-8b09-87c3fe96a967/volumes" Nov 22 09:30:29 crc kubenswrapper[4856]: I1122 09:30:29.754490 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:30:29 crc kubenswrapper[4856]: I1122 09:30:29.755069 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:30:29 crc kubenswrapper[4856]: I1122 09:30:29.755120 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:30:29 crc kubenswrapper[4856]: I1122 09:30:29.756029 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:30:29 crc kubenswrapper[4856]: I1122 09:30:29.756099 4856 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" gracePeriod=600 Nov 22 09:30:29 crc kubenswrapper[4856]: E1122 09:30:29.894659 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:30:30 crc kubenswrapper[4856]: I1122 09:30:30.783048 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" exitCode=0 Nov 22 09:30:30 crc kubenswrapper[4856]: I1122 09:30:30.783097 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4"} Nov 22 09:30:30 crc kubenswrapper[4856]: I1122 09:30:30.783133 4856 scope.go:117] "RemoveContainer" containerID="7a460e306aceef36bf4d6d67ac2c47b2dc59859e0fadc443f2daba9cf3e50d6a" Nov 22 09:30:30 crc kubenswrapper[4856]: I1122 09:30:30.783880 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:30:30 crc kubenswrapper[4856]: E1122 09:30:30.784182 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:30:44 crc kubenswrapper[4856]: I1122 09:30:44.710776 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:30:44 crc kubenswrapper[4856]: E1122 09:30:44.712285 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:30:49 crc kubenswrapper[4856]: I1122 09:30:49.515450 4856 scope.go:117] "RemoveContainer" containerID="d51c210ed60dcedb4ea79256ad59c40c4d926cf361d6a8619e38780913db5594" Nov 22 09:30:58 crc kubenswrapper[4856]: I1122 09:30:58.724049 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:30:58 crc kubenswrapper[4856]: E1122 09:30:58.725020 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:31:13 crc kubenswrapper[4856]: I1122 09:31:13.709100 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:31:13 crc kubenswrapper[4856]: E1122 09:31:13.709661 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:31:24 crc kubenswrapper[4856]: I1122 09:31:24.709653 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:31:24 crc kubenswrapper[4856]: E1122 09:31:24.710587 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:31:38 crc kubenswrapper[4856]: I1122 09:31:38.721127 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:31:38 crc kubenswrapper[4856]: E1122 09:31:38.723286 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:31:51 crc kubenswrapper[4856]: I1122 09:31:51.710538 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:31:51 crc kubenswrapper[4856]: E1122 09:31:51.711329 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:32:02 crc kubenswrapper[4856]: I1122 09:32:02.709440 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:32:02 crc kubenswrapper[4856]: E1122 09:32:02.710235 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:32:16 crc kubenswrapper[4856]: I1122 09:32:16.710745 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:32:16 crc kubenswrapper[4856]: E1122 09:32:16.711628 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:32:29 crc kubenswrapper[4856]: I1122 09:32:29.711303 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:32:29 crc kubenswrapper[4856]: E1122 09:32:29.712539 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:32:35 crc kubenswrapper[4856]: I1122 09:32:35.037987 4856 generic.go:334] "Generic (PLEG): container finished" podID="3cfedb7a-57e2-4533-95c9-4c691087caed" containerID="918346bf938b09a9deee2bdd01df80b2739405dc29d78bcdd3dfbf0bbf21511e" exitCode=0 Nov 22 09:32:35 crc kubenswrapper[4856]: I1122 09:32:35.038113 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" event={"ID":"3cfedb7a-57e2-4533-95c9-4c691087caed","Type":"ContainerDied","Data":"918346bf938b09a9deee2bdd01df80b2739405dc29d78bcdd3dfbf0bbf21511e"} Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.469193 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.567275 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmtvw\" (UniqueName: \"kubernetes.io/projected/3cfedb7a-57e2-4533-95c9-4c691087caed-kube-api-access-rmtvw\") pod \"3cfedb7a-57e2-4533-95c9-4c691087caed\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.567371 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-combined-ca-bundle\") pod \"3cfedb7a-57e2-4533-95c9-4c691087caed\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.567435 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-agent-neutron-config-0\") pod \"3cfedb7a-57e2-4533-95c9-4c691087caed\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.567549 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-ssh-key\") pod \"3cfedb7a-57e2-4533-95c9-4c691087caed\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.567590 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-inventory\") pod \"3cfedb7a-57e2-4533-95c9-4c691087caed\" (UID: \"3cfedb7a-57e2-4533-95c9-4c691087caed\") " Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.582567 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-combined-ca-bundle" (OuterVolumeSpecName: "neutron-sriov-combined-ca-bundle") pod "3cfedb7a-57e2-4533-95c9-4c691087caed" (UID: "3cfedb7a-57e2-4533-95c9-4c691087caed"). InnerVolumeSpecName "neutron-sriov-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.582655 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfedb7a-57e2-4533-95c9-4c691087caed-kube-api-access-rmtvw" (OuterVolumeSpecName: "kube-api-access-rmtvw") pod "3cfedb7a-57e2-4533-95c9-4c691087caed" (UID: "3cfedb7a-57e2-4533-95c9-4c691087caed"). InnerVolumeSpecName "kube-api-access-rmtvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.602206 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3cfedb7a-57e2-4533-95c9-4c691087caed" (UID: "3cfedb7a-57e2-4533-95c9-4c691087caed"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.603838 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-inventory" (OuterVolumeSpecName: "inventory") pod "3cfedb7a-57e2-4533-95c9-4c691087caed" (UID: "3cfedb7a-57e2-4533-95c9-4c691087caed"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.613763 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-sriov-agent-neutron-config-0") pod "3cfedb7a-57e2-4533-95c9-4c691087caed" (UID: "3cfedb7a-57e2-4533-95c9-4c691087caed"). InnerVolumeSpecName "neutron-sriov-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.670489 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmtvw\" (UniqueName: \"kubernetes.io/projected/3cfedb7a-57e2-4533-95c9-4c691087caed-kube-api-access-rmtvw\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.670554 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.670570 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-sriov-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-neutron-sriov-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.670580 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:36 crc kubenswrapper[4856]: I1122 09:32:36.670588 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3cfedb7a-57e2-4533-95c9-4c691087caed-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.059543 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" event={"ID":"3cfedb7a-57e2-4533-95c9-4c691087caed","Type":"ContainerDied","Data":"0819bdb67e879f52eac5554704b248d49b4750096ad89c7fe960881f3fab8871"} Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.059586 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0819bdb67e879f52eac5554704b248d49b4750096ad89c7fe960881f3fab8871" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.059616 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-sriov-openstack-openstack-cell1-xgb8k" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.324469 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr"] Nov 22 09:32:37 crc kubenswrapper[4856]: E1122 09:32:37.325264 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfedb7a-57e2-4533-95c9-4c691087caed" containerName="neutron-sriov-openstack-openstack-cell1" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.325293 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfedb7a-57e2-4533-95c9-4c691087caed" containerName="neutron-sriov-openstack-openstack-cell1" Nov 22 09:32:37 crc kubenswrapper[4856]: E1122 09:32:37.325367 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a693bd-1575-42ab-8da5-da5e4a56838c" containerName="collect-profiles" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.325377 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a693bd-1575-42ab-8da5-da5e4a56838c" containerName="collect-profiles" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.326753 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a693bd-1575-42ab-8da5-da5e4a56838c" containerName="collect-profiles" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.326793 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfedb7a-57e2-4533-95c9-4c691087caed" containerName="neutron-sriov-openstack-openstack-cell1" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.328207 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.330495 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.331986 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-dhcp-agent-neutron-config" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.332259 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.332343 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.335116 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.338215 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr"] Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.493382 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.493791 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.493859 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.493902 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldmhf\" (UniqueName: \"kubernetes.io/projected/4021796f-1cba-4573-9efa-4ed786ba2251-kube-api-access-ldmhf\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.494091 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.596832 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.596926 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldmhf\" (UniqueName: \"kubernetes.io/projected/4021796f-1cba-4573-9efa-4ed786ba2251-kube-api-access-ldmhf\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.597000 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.597126 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.597165 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.601816 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-agent-neutron-config-0\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.606623 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-combined-ca-bundle\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.608931 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-inventory\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.610067 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-ssh-key\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.617042 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldmhf\" (UniqueName: \"kubernetes.io/projected/4021796f-1cba-4573-9efa-4ed786ba2251-kube-api-access-ldmhf\") pod \"neutron-dhcp-openstack-openstack-cell1-qz9nr\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:37 crc kubenswrapper[4856]: I1122 09:32:37.694407 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:32:38 crc kubenswrapper[4856]: I1122 09:32:38.290972 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr"] Nov 22 09:32:38 crc kubenswrapper[4856]: I1122 09:32:38.779222 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:32:39 crc kubenswrapper[4856]: I1122 09:32:39.082944 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" event={"ID":"4021796f-1cba-4573-9efa-4ed786ba2251","Type":"ContainerStarted","Data":"8c3d8762ea505de58026eeff7f1426a98ef2c5d3f412029d9ae636a26a7e954b"} Nov 22 09:32:39 crc kubenswrapper[4856]: I1122 09:32:39.373104 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:32:40 crc kubenswrapper[4856]: I1122 09:32:40.095077 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" event={"ID":"4021796f-1cba-4573-9efa-4ed786ba2251","Type":"ContainerStarted","Data":"8a094bd7541143e1e1014c00024e91579bda4f3528e59f5364af016956fc08df"} Nov 22 09:32:40 crc kubenswrapper[4856]: I1122 09:32:40.118394 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" podStartSLOduration=2.52762061 podStartE2EDuration="3.118377025s" podCreationTimestamp="2025-11-22 09:32:37 +0000 UTC" firstStartedPulling="2025-11-22 09:32:38.778946491 +0000 UTC m=+9001.192339759" lastFinishedPulling="2025-11-22 09:32:39.369702916 +0000 UTC m=+9001.783096174" observedRunningTime="2025-11-22 09:32:40.111656264 +0000 UTC m=+9002.525049522" watchObservedRunningTime="2025-11-22 09:32:40.118377025 +0000 UTC m=+9002.531770283" Nov 22 09:32:42 crc kubenswrapper[4856]: I1122 09:32:42.709928 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:32:42 crc kubenswrapper[4856]: E1122 09:32:42.710405 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.625224 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dhz8h"] Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.628236 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.643013 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhz8h"] Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.761498 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-catalog-content\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.761600 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2f6\" (UniqueName: \"kubernetes.io/projected/dac40159-6bde-449e-b2b0-a13819b16b37-kube-api-access-rd2f6\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.762231 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-utilities\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.864011 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-utilities\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.864117 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-catalog-content\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.864156 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2f6\" (UniqueName: \"kubernetes.io/projected/dac40159-6bde-449e-b2b0-a13819b16b37-kube-api-access-rd2f6\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.864541 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-utilities\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.864627 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-catalog-content\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.883555 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rd2f6\" (UniqueName: \"kubernetes.io/projected/dac40159-6bde-449e-b2b0-a13819b16b37-kube-api-access-rd2f6\") pod \"redhat-marketplace-dhz8h\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:44 crc kubenswrapper[4856]: I1122 09:32:44.945421 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:32:49 crc kubenswrapper[4856]: I1122 09:32:49.806274 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhz8h"] Nov 22 09:32:51 crc kubenswrapper[4856]: I1122 09:32:51.203236 4856 generic.go:334] "Generic (PLEG): container finished" podID="dac40159-6bde-449e-b2b0-a13819b16b37" containerID="e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e" exitCode=0 Nov 22 09:32:51 crc kubenswrapper[4856]: I1122 09:32:51.203807 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhz8h" event={"ID":"dac40159-6bde-449e-b2b0-a13819b16b37","Type":"ContainerDied","Data":"e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e"} Nov 22 09:32:51 crc kubenswrapper[4856]: I1122 09:32:51.204945 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhz8h" event={"ID":"dac40159-6bde-449e-b2b0-a13819b16b37","Type":"ContainerStarted","Data":"957b9716cb790d716ccbc7dfe347e74fbe377f9fdeb664c6fab40a5472e442d7"} Nov 22 09:32:53 crc kubenswrapper[4856]: I1122 09:32:53.234900 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhz8h" event={"ID":"dac40159-6bde-449e-b2b0-a13819b16b37","Type":"ContainerStarted","Data":"a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e"} Nov 22 09:32:54 crc kubenswrapper[4856]: I1122 09:32:54.245056 4856 generic.go:334] "Generic (PLEG): container finished" podID="dac40159-6bde-449e-b2b0-a13819b16b37" containerID="a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e" exitCode=0 Nov 22 09:32:54 crc kubenswrapper[4856]: I1122 09:32:54.245098 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhz8h" event={"ID":"dac40159-6bde-449e-b2b0-a13819b16b37","Type":"ContainerDied","Data":"a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e"} Nov 22 09:32:54 crc kubenswrapper[4856]: I1122 09:32:54.710272 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:32:54 crc kubenswrapper[4856]: E1122 09:32:54.710971 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:32:56 crc kubenswrapper[4856]: I1122 09:32:56.266096 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhz8h" event={"ID":"dac40159-6bde-449e-b2b0-a13819b16b37","Type":"ContainerStarted","Data":"ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325"} Nov 22 09:32:56 crc kubenswrapper[4856]: I1122 09:32:56.290819 4856 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-dhz8h" podStartSLOduration=8.443099928 podStartE2EDuration="12.290799732s" podCreationTimestamp="2025-11-22 09:32:44 +0000 UTC" firstStartedPulling="2025-11-22 09:32:51.205421172 +0000 UTC m=+9013.618814430" lastFinishedPulling="2025-11-22 09:32:55.053120966 +0000 UTC m=+9017.466514234" observedRunningTime="2025-11-22 09:32:56.280603496 +0000 UTC m=+9018.693996754" watchObservedRunningTime="2025-11-22 09:32:56.290799732 +0000 UTC m=+9018.704192990" Nov 22 09:33:04 crc kubenswrapper[4856]: I1122 09:33:04.946267 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:33:04 crc kubenswrapper[4856]: I1122 09:33:04.946744 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:33:04 crc kubenswrapper[4856]: I1122 09:33:04.994365 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:33:05 crc kubenswrapper[4856]: I1122 09:33:05.428430 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:33:05 crc kubenswrapper[4856]: I1122 09:33:05.529912 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhz8h"] Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.376006 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dhz8h" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="registry-server" containerID="cri-o://ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325" gracePeriod=2 Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.710152 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:33:07 crc kubenswrapper[4856]: E1122 09:33:07.710774 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.853737 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.955262 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-catalog-content\") pod \"dac40159-6bde-449e-b2b0-a13819b16b37\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.955860 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd2f6\" (UniqueName: \"kubernetes.io/projected/dac40159-6bde-449e-b2b0-a13819b16b37-kube-api-access-rd2f6\") pod \"dac40159-6bde-449e-b2b0-a13819b16b37\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.955992 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-utilities\") pod \"dac40159-6bde-449e-b2b0-a13819b16b37\" (UID: \"dac40159-6bde-449e-b2b0-a13819b16b37\") " Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.957161 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-utilities" (OuterVolumeSpecName: "utilities") pod "dac40159-6bde-449e-b2b0-a13819b16b37" (UID: "dac40159-6bde-449e-b2b0-a13819b16b37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.964591 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac40159-6bde-449e-b2b0-a13819b16b37-kube-api-access-rd2f6" (OuterVolumeSpecName: "kube-api-access-rd2f6") pod "dac40159-6bde-449e-b2b0-a13819b16b37" (UID: "dac40159-6bde-449e-b2b0-a13819b16b37"). InnerVolumeSpecName "kube-api-access-rd2f6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:33:07 crc kubenswrapper[4856]: I1122 09:33:07.974069 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dac40159-6bde-449e-b2b0-a13819b16b37" (UID: "dac40159-6bde-449e-b2b0-a13819b16b37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.059446 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.059483 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd2f6\" (UniqueName: \"kubernetes.io/projected/dac40159-6bde-449e-b2b0-a13819b16b37-kube-api-access-rd2f6\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.059493 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dac40159-6bde-449e-b2b0-a13819b16b37-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.387190 4856 generic.go:334] "Generic (PLEG): container finished" podID="dac40159-6bde-449e-b2b0-a13819b16b37" containerID="ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325" exitCode=0 Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.387234 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dhz8h" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.387232 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhz8h" event={"ID":"dac40159-6bde-449e-b2b0-a13819b16b37","Type":"ContainerDied","Data":"ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325"} Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.387308 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dhz8h" event={"ID":"dac40159-6bde-449e-b2b0-a13819b16b37","Type":"ContainerDied","Data":"957b9716cb790d716ccbc7dfe347e74fbe377f9fdeb664c6fab40a5472e442d7"} Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.387325 4856 scope.go:117] "RemoveContainer" containerID="ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.425399 4856 scope.go:117] "RemoveContainer" containerID="a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.432946 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhz8h"] Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.442712 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dhz8h"] Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.467363 4856 scope.go:117] "RemoveContainer" containerID="e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.520227 4856 scope.go:117] "RemoveContainer" containerID="ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325" Nov 22 09:33:08 crc kubenswrapper[4856]: E1122 09:33:08.520978 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325\": container with ID starting with ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325 not found: ID does not exist" containerID="ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.521013 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325"} err="failed to get container status \"ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325\": rpc error: code = NotFound desc = could not find container \"ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325\": container with ID starting with ec890d7e1970f55f9a0a1795c2e1dd672e1c66ffdcbcceca98a25e421c4ce325 not found: ID does not exist" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.521033 4856 scope.go:117] "RemoveContainer" containerID="a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e" Nov 22 09:33:08 crc kubenswrapper[4856]: E1122 09:33:08.521498 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e\": container with ID starting with a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e not found: ID does not exist" containerID="a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.521562 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e"} err="failed to get container status \"a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e\": rpc error: code = NotFound desc = could not find container \"a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e\": container with ID starting with a534a0007b1735fc0d75c978d459621621d2120503667bcc0b0760697b4e093e not found: ID does not exist" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.521591 4856 scope.go:117] "RemoveContainer" containerID="e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e" Nov 22 09:33:08 crc kubenswrapper[4856]: E1122 09:33:08.521936 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e\": container with ID starting with e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e not found: ID does not exist" containerID="e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.521960 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e"} err="failed to get container status \"e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e\": rpc error: code = NotFound desc = could not find container \"e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e\": container with ID starting with e298668707802e3e33ca05b657cc62e27f011c8ca9310b1791703e4ec1ea835e not found: ID does not exist" Nov 22 09:33:08 crc kubenswrapper[4856]: I1122 09:33:08.732372 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" path="/var/lib/kubelet/pods/dac40159-6bde-449e-b2b0-a13819b16b37/volumes" Nov 22 09:33:21 crc kubenswrapper[4856]: I1122 09:33:21.710050 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:33:21 crc kubenswrapper[4856]: E1122 09:33:21.710832 4856 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:33:33 crc kubenswrapper[4856]: I1122 09:33:33.710376 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:33:33 crc kubenswrapper[4856]: E1122 09:33:33.711392 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:33:33 crc kubenswrapper[4856]: I1122 09:33:33.770312 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="d4dcc1d5-4e57-45ff-931e-0be9bc3be546" containerName="galera" probeResult="failure" output="command timed out" Nov 22 09:33:33 crc kubenswrapper[4856]: I1122 09:33:33.770986 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d4dcc1d5-4e57-45ff-931e-0be9bc3be546" containerName="galera" probeResult="failure" output="command timed out" Nov 22 09:33:45 crc kubenswrapper[4856]: I1122 09:33:45.710879 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:33:45 crc kubenswrapper[4856]: E1122 09:33:45.711952 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.330820 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6fwg6"] Nov 22 09:33:58 crc kubenswrapper[4856]: E1122 09:33:58.333028 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="registry-server" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.333108 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="registry-server" Nov 22 09:33:58 crc kubenswrapper[4856]: E1122 09:33:58.333182 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="extract-utilities" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.333243 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="extract-utilities" Nov 22 09:33:58 crc kubenswrapper[4856]: E1122 09:33:58.333350 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="extract-content" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.333424 4856 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="extract-content" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.333706 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac40159-6bde-449e-b2b0-a13819b16b37" containerName="registry-server" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.335231 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.360031 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fwg6"] Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.432665 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g6ft\" (UniqueName: \"kubernetes.io/projected/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-kube-api-access-9g6ft\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.432997 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-utilities\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.433175 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-catalog-content\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.535321 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-catalog-content\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.535401 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g6ft\" (UniqueName: \"kubernetes.io/projected/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-kube-api-access-9g6ft\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.535462 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-utilities\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.536008 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-utilities\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.536142 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-catalog-content\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.557964 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g6ft\" (UniqueName: \"kubernetes.io/projected/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-kube-api-access-9g6ft\") pod \"community-operators-6fwg6\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.667887 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:33:58 crc kubenswrapper[4856]: I1122 09:33:58.721954 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:33:58 crc kubenswrapper[4856]: E1122 09:33:58.722216 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:33:59 crc kubenswrapper[4856]: I1122 09:33:59.274473 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6fwg6"] Nov 22 09:33:59 crc kubenswrapper[4856]: I1122 09:33:59.954572 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fwg6" event={"ID":"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f","Type":"ContainerStarted","Data":"9a4d2d51fd05b36220a8ded55ec3fe64754f656b73f9cedeeae3a933eac2da7c"} Nov 22 09:34:00 crc kubenswrapper[4856]: E1122 09:34:00.412434 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d544d2_b6b2_4dd3_bd2b_ed310de2ff1f.slice/crio-conmon-8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9d544d2_b6b2_4dd3_bd2b_ed310de2ff1f.slice/crio-8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea.scope\": RecentStats: unable to find data in memory cache]" Nov 22 09:34:00 crc kubenswrapper[4856]: I1122 09:34:00.966629 4856 generic.go:334] "Generic (PLEG): container finished" podID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerID="8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea" exitCode=0 Nov 22 09:34:00 crc kubenswrapper[4856]: I1122 09:34:00.966858 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fwg6" event={"ID":"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f","Type":"ContainerDied","Data":"8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea"} Nov 22 09:34:01 crc kubenswrapper[4856]: I1122 09:34:01.980846 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fwg6" 
event={"ID":"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f","Type":"ContainerStarted","Data":"66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e"} Nov 22 09:34:04 crc kubenswrapper[4856]: I1122 09:34:04.010276 4856 generic.go:334] "Generic (PLEG): container finished" podID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerID="66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e" exitCode=0 Nov 22 09:34:04 crc kubenswrapper[4856]: I1122 09:34:04.010358 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fwg6" event={"ID":"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f","Type":"ContainerDied","Data":"66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e"} Nov 22 09:34:05 crc kubenswrapper[4856]: I1122 09:34:05.024594 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fwg6" event={"ID":"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f","Type":"ContainerStarted","Data":"d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915"} Nov 22 09:34:05 crc kubenswrapper[4856]: I1122 09:34:05.057689 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6fwg6" podStartSLOduration=3.574179985 podStartE2EDuration="7.057663164s" podCreationTimestamp="2025-11-22 09:33:58 +0000 UTC" firstStartedPulling="2025-11-22 09:34:00.96971658 +0000 UTC m=+9083.383109838" lastFinishedPulling="2025-11-22 09:34:04.453199759 +0000 UTC m=+9086.866593017" observedRunningTime="2025-11-22 09:34:05.042400441 +0000 UTC m=+9087.455793699" watchObservedRunningTime="2025-11-22 09:34:05.057663164 +0000 UTC m=+9087.471056432" Nov 22 09:34:08 crc kubenswrapper[4856]: I1122 09:34:08.668586 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:34:08 crc kubenswrapper[4856]: I1122 09:34:08.669933 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:34:08 crc kubenswrapper[4856]: I1122 09:34:08.736936 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:34:09 crc kubenswrapper[4856]: I1122 09:34:09.131844 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:34:09 crc kubenswrapper[4856]: I1122 09:34:09.190776 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fwg6"] Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.090055 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6fwg6" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="registry-server" containerID="cri-o://d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915" gracePeriod=2 Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.580789 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.706714 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-utilities\") pod \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.706817 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-catalog-content\") pod \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.706874 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g6ft\" (UniqueName: \"kubernetes.io/projected/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-kube-api-access-9g6ft\") pod \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\" (UID: \"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f\") " Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.707720 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-utilities" (OuterVolumeSpecName: "utilities") pod "e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" (UID: "e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.721741 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-kube-api-access-9g6ft" (OuterVolumeSpecName: "kube-api-access-9g6ft") pod "e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" (UID: "e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f"). InnerVolumeSpecName "kube-api-access-9g6ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.764146 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" (UID: "e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.812302 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.812349 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:11 crc kubenswrapper[4856]: I1122 09:34:11.812372 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g6ft\" (UniqueName: \"kubernetes.io/projected/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f-kube-api-access-9g6ft\") on node \"crc\" DevicePath \"\"" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.103233 4856 generic.go:334] "Generic (PLEG): container finished" podID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerID="d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915" exitCode=0 Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.103319 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fwg6" event={"ID":"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f","Type":"ContainerDied","Data":"d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915"} Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.103611 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6fwg6" event={"ID":"e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f","Type":"ContainerDied","Data":"9a4d2d51fd05b36220a8ded55ec3fe64754f656b73f9cedeeae3a933eac2da7c"} Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.103636 4856 scope.go:117] "RemoveContainer" containerID="d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.103366 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6fwg6" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.128978 4856 scope.go:117] "RemoveContainer" containerID="66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.162986 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6fwg6"] Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.176874 4856 scope.go:117] "RemoveContainer" containerID="8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.177881 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6fwg6"] Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.218852 4856 scope.go:117] "RemoveContainer" containerID="d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915" Nov 22 09:34:12 crc kubenswrapper[4856]: E1122 09:34:12.219247 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915\": container with ID starting with d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915 not found: ID does not exist" containerID="d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.219281 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915"} err="failed to get container status \"d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915\": rpc error: code = NotFound desc = could not find container \"d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915\": container with ID starting with d50b6dded8ed36978b28bfa8b46a7a9b56aa66510e3071028447b3152a1bc915 not found: ID does not exist" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.219302 4856 scope.go:117] "RemoveContainer" containerID="66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e" Nov 22 09:34:12 crc kubenswrapper[4856]: E1122 09:34:12.219567 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e\": container with ID starting with 66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e not found: ID does not exist" containerID="66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.219592 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e"} err="failed to get container status \"66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e\": rpc error: code = NotFound desc = could not find container \"66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e\": container with ID starting with 66cb504dde15a95a3767890b761cffd123e5c1e0ba18bd37360d6cd118530d0e not found: ID does not exist" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.219610 4856 scope.go:117] "RemoveContainer" containerID="8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea" Nov 22 09:34:12 crc kubenswrapper[4856]: E1122 09:34:12.219833 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea\": container with ID starting with 8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea not found: ID does not exist" containerID="8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.219866 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea"} err="failed to get container status \"8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea\": rpc error: code = NotFound desc = could not find container \"8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea\": container with ID starting with 8e1619174a5a5a5d0237608f183b69ee6333228dbfa6c8be0227c8ef171503ea not found: ID does not exist" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.710305 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:34:12 crc kubenswrapper[4856]: E1122 09:34:12.710749 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:34:12 crc kubenswrapper[4856]: I1122 09:34:12.728644 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" path="/var/lib/kubelet/pods/e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f/volumes" Nov 22 09:34:23 crc kubenswrapper[4856]: I1122 09:34:23.709825 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:34:23 crc kubenswrapper[4856]: E1122 09:34:23.713164 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:34:36 crc kubenswrapper[4856]: I1122 09:34:36.713290 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:34:36 crc kubenswrapper[4856]: E1122 09:34:36.714962 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:34:50 crc kubenswrapper[4856]: I1122 09:34:50.711551 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:34:50 crc kubenswrapper[4856]: E1122 09:34:50.712479 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:35:03 crc kubenswrapper[4856]: I1122 09:35:03.710532 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:35:03 crc kubenswrapper[4856]: E1122 09:35:03.711555 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:35:16 crc kubenswrapper[4856]: I1122 09:35:16.710440 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:35:16 crc kubenswrapper[4856]: E1122 09:35:16.711180 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:35:30 crc kubenswrapper[4856]: I1122 09:35:30.981998 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-68fcd9d79d-pb2lw" podUID="24afd937-020f-43ff-beec-3bccac3dffec" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 22 09:35:31 crc kubenswrapper[4856]: I1122 09:35:31.709419 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:35:32 crc kubenswrapper[4856]: I1122 09:35:32.068934 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"b1d646530f87228299c4981c638bb3d4d4475a9e02490d87890d2a187ed1d6e4"} Nov 22 09:36:37 crc kubenswrapper[4856]: I1122 09:36:37.961807 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zzbc6"] Nov 22 09:36:37 crc kubenswrapper[4856]: E1122 09:36:37.962672 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="extract-content" Nov 22 09:36:37 crc kubenswrapper[4856]: I1122 09:36:37.962687 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="extract-content" Nov 22 09:36:37 crc kubenswrapper[4856]: E1122 09:36:37.962695 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="extract-utilities" Nov 22 09:36:37 crc kubenswrapper[4856]: I1122 09:36:37.962702 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="extract-utilities" Nov 22 09:36:37 crc kubenswrapper[4856]: E1122 09:36:37.962722 4856 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="registry-server" Nov 22 09:36:37 crc kubenswrapper[4856]: I1122 09:36:37.962728 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="registry-server" Nov 22 09:36:37 crc kubenswrapper[4856]: I1122 09:36:37.962936 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9d544d2-b6b2-4dd3-bd2b-ed310de2ff1f" containerName="registry-server" Nov 22 09:36:37 crc kubenswrapper[4856]: I1122 09:36:37.964380 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:37 crc kubenswrapper[4856]: I1122 09:36:37.982890 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzbc6"] Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.028991 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmh27\" (UniqueName: \"kubernetes.io/projected/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-kube-api-access-hmh27\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.029108 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-catalog-content\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.029157 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-utilities\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.131384 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmh27\" (UniqueName: \"kubernetes.io/projected/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-kube-api-access-hmh27\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.131523 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-catalog-content\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.131583 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-utilities\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.132254 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-catalog-content\") pod 
\"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.132530 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-utilities\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.155721 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmh27\" (UniqueName: \"kubernetes.io/projected/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-kube-api-access-hmh27\") pod \"redhat-operators-zzbc6\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.301322 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.779140 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzbc6"] Nov 22 09:36:38 crc kubenswrapper[4856]: I1122 09:36:38.808192 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzbc6" event={"ID":"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4","Type":"ContainerStarted","Data":"59a3d8277e0bfea1827a8b3e5d54556d76c7b2709dc8cbc9e77059e9112f82b4"} Nov 22 09:36:39 crc kubenswrapper[4856]: I1122 09:36:39.827896 4856 generic.go:334] "Generic (PLEG): container finished" podID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerID="da5bc7fecd0ab4da33d151e9f2dc374c5d42d92618f6128ad7fa00f32b7376d7" exitCode=0 Nov 22 09:36:39 crc kubenswrapper[4856]: I1122 09:36:39.828002 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzbc6" event={"ID":"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4","Type":"ContainerDied","Data":"da5bc7fecd0ab4da33d151e9f2dc374c5d42d92618f6128ad7fa00f32b7376d7"} Nov 22 09:36:40 crc kubenswrapper[4856]: I1122 09:36:40.842307 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzbc6" event={"ID":"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4","Type":"ContainerStarted","Data":"96a33c82abf99c52797d32e39565cdebeb6d85585496def4a49628156a14db78"} Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.326002 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4798k"] Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.329278 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.336881 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4798k"] Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.351094 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxrg5\" (UniqueName: \"kubernetes.io/projected/0a52905c-c240-425f-982c-987eb0fbe3e9-kube-api-access-pxrg5\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.351237 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-utilities\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.351395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-catalog-content\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.453325 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-utilities\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.453473 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-catalog-content\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.453546 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxrg5\" (UniqueName: \"kubernetes.io/projected/0a52905c-c240-425f-982c-987eb0fbe3e9-kube-api-access-pxrg5\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.453795 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-utilities\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.453830 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-catalog-content\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.874877 4856 generic.go:334] "Generic 
(PLEG): container finished" podID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerID="96a33c82abf99c52797d32e39565cdebeb6d85585496def4a49628156a14db78" exitCode=0 Nov 22 09:36:43 crc kubenswrapper[4856]: I1122 09:36:43.874920 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzbc6" event={"ID":"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4","Type":"ContainerDied","Data":"96a33c82abf99c52797d32e39565cdebeb6d85585496def4a49628156a14db78"} Nov 22 09:36:44 crc kubenswrapper[4856]: I1122 09:36:44.169187 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxrg5\" (UniqueName: \"kubernetes.io/projected/0a52905c-c240-425f-982c-987eb0fbe3e9-kube-api-access-pxrg5\") pod \"certified-operators-4798k\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:44 crc kubenswrapper[4856]: I1122 09:36:44.255318 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:44 crc kubenswrapper[4856]: I1122 09:36:44.869260 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4798k"] Nov 22 09:36:44 crc kubenswrapper[4856]: I1122 09:36:44.893272 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4798k" event={"ID":"0a52905c-c240-425f-982c-987eb0fbe3e9","Type":"ContainerStarted","Data":"a29e89746cdfd3c86e919c98f4d12a707042a08ab1d283b2daacaa5403acbd03"} Nov 22 09:36:45 crc kubenswrapper[4856]: I1122 09:36:45.905283 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzbc6" event={"ID":"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4","Type":"ContainerStarted","Data":"d92c2ee7f8bbc637f3b4d8316c22bb428c47a0391bdcb8ab2089ecd79f81e794"} Nov 22 09:36:45 crc kubenswrapper[4856]: I1122 09:36:45.907985 4856 generic.go:334] "Generic (PLEG): container finished" podID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerID="dafde85079586ddccd2a4dc46014e4bae44d1f7ab2bd4b8ef23bf3c89a7e06df" exitCode=0 Nov 22 09:36:45 crc kubenswrapper[4856]: I1122 09:36:45.908094 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4798k" event={"ID":"0a52905c-c240-425f-982c-987eb0fbe3e9","Type":"ContainerDied","Data":"dafde85079586ddccd2a4dc46014e4bae44d1f7ab2bd4b8ef23bf3c89a7e06df"} Nov 22 09:36:45 crc kubenswrapper[4856]: I1122 09:36:45.911479 4856 generic.go:334] "Generic (PLEG): container finished" podID="4021796f-1cba-4573-9efa-4ed786ba2251" containerID="8a094bd7541143e1e1014c00024e91579bda4f3528e59f5364af016956fc08df" exitCode=0 Nov 22 09:36:45 crc kubenswrapper[4856]: I1122 09:36:45.911544 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" event={"ID":"4021796f-1cba-4573-9efa-4ed786ba2251","Type":"ContainerDied","Data":"8a094bd7541143e1e1014c00024e91579bda4f3528e59f5364af016956fc08df"} Nov 22 09:36:45 crc kubenswrapper[4856]: I1122 09:36:45.931761 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zzbc6" podStartSLOduration=3.9544984530000002 podStartE2EDuration="8.931734816s" podCreationTimestamp="2025-11-22 09:36:37 +0000 UTC" firstStartedPulling="2025-11-22 09:36:39.830491273 +0000 UTC m=+9242.243884531" lastFinishedPulling="2025-11-22 09:36:44.807727616 +0000 UTC m=+9247.221120894" 
observedRunningTime="2025-11-22 09:36:45.922832116 +0000 UTC m=+9248.336225384" watchObservedRunningTime="2025-11-22 09:36:45.931734816 +0000 UTC m=+9248.345128084" Nov 22 09:36:46 crc kubenswrapper[4856]: I1122 09:36:46.925349 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4798k" event={"ID":"0a52905c-c240-425f-982c-987eb0fbe3e9","Type":"ContainerStarted","Data":"57e1f646294fc29111ce965c1122ad069328e9cf1bf0543abd5a4f5867d77660"} Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.357579 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.558458 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-combined-ca-bundle\") pod \"4021796f-1cba-4573-9efa-4ed786ba2251\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.558907 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldmhf\" (UniqueName: \"kubernetes.io/projected/4021796f-1cba-4573-9efa-4ed786ba2251-kube-api-access-ldmhf\") pod \"4021796f-1cba-4573-9efa-4ed786ba2251\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.558961 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-ssh-key\") pod \"4021796f-1cba-4573-9efa-4ed786ba2251\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.559010 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-agent-neutron-config-0\") pod \"4021796f-1cba-4573-9efa-4ed786ba2251\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.559137 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-inventory\") pod \"4021796f-1cba-4573-9efa-4ed786ba2251\" (UID: \"4021796f-1cba-4573-9efa-4ed786ba2251\") " Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.568781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-combined-ca-bundle" (OuterVolumeSpecName: "neutron-dhcp-combined-ca-bundle") pod "4021796f-1cba-4573-9efa-4ed786ba2251" (UID: "4021796f-1cba-4573-9efa-4ed786ba2251"). InnerVolumeSpecName "neutron-dhcp-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.568868 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4021796f-1cba-4573-9efa-4ed786ba2251-kube-api-access-ldmhf" (OuterVolumeSpecName: "kube-api-access-ldmhf") pod "4021796f-1cba-4573-9efa-4ed786ba2251" (UID: "4021796f-1cba-4573-9efa-4ed786ba2251"). InnerVolumeSpecName "kube-api-access-ldmhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.588017 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-inventory" (OuterVolumeSpecName: "inventory") pod "4021796f-1cba-4573-9efa-4ed786ba2251" (UID: "4021796f-1cba-4573-9efa-4ed786ba2251"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.593852 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-dhcp-agent-neutron-config-0") pod "4021796f-1cba-4573-9efa-4ed786ba2251" (UID: "4021796f-1cba-4573-9efa-4ed786ba2251"). InnerVolumeSpecName "neutron-dhcp-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.596227 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4021796f-1cba-4573-9efa-4ed786ba2251" (UID: "4021796f-1cba-4573-9efa-4ed786ba2251"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.660917 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.660948 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldmhf\" (UniqueName: \"kubernetes.io/projected/4021796f-1cba-4573-9efa-4ed786ba2251-kube-api-access-ldmhf\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.660958 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.660967 4856 reconciler_common.go:293] "Volume detached for volume \"neutron-dhcp-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-neutron-dhcp-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.660975 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4021796f-1cba-4573-9efa-4ed786ba2251-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.935136 4856 generic.go:334] "Generic (PLEG): container finished" podID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerID="57e1f646294fc29111ce965c1122ad069328e9cf1bf0543abd5a4f5867d77660" exitCode=0 Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.935221 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4798k" event={"ID":"0a52905c-c240-425f-982c-987eb0fbe3e9","Type":"ContainerDied","Data":"57e1f646294fc29111ce965c1122ad069328e9cf1bf0543abd5a4f5867d77660"} Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.937121 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" 
event={"ID":"4021796f-1cba-4573-9efa-4ed786ba2251","Type":"ContainerDied","Data":"8c3d8762ea505de58026eeff7f1426a98ef2c5d3f412029d9ae636a26a7e954b"} Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.937142 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c3d8762ea505de58026eeff7f1426a98ef2c5d3f412029d9ae636a26a7e954b" Nov 22 09:36:47 crc kubenswrapper[4856]: I1122 09:36:47.937196 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dhcp-openstack-openstack-cell1-qz9nr" Nov 22 09:36:48 crc kubenswrapper[4856]: I1122 09:36:48.302582 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:48 crc kubenswrapper[4856]: I1122 09:36:48.302892 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:49 crc kubenswrapper[4856]: I1122 09:36:49.355499 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zzbc6" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="registry-server" probeResult="failure" output=< Nov 22 09:36:49 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:36:49 crc kubenswrapper[4856]: > Nov 22 09:36:49 crc kubenswrapper[4856]: I1122 09:36:49.956542 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4798k" event={"ID":"0a52905c-c240-425f-982c-987eb0fbe3e9","Type":"ContainerStarted","Data":"63f390a494370bd921057eede0143b78f7c7ce2c363521fbb8ab25d4ff00785c"} Nov 22 09:36:49 crc kubenswrapper[4856]: I1122 09:36:49.982575 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4798k" podStartSLOduration=4.130406882 podStartE2EDuration="6.982552558s" podCreationTimestamp="2025-11-22 09:36:43 +0000 UTC" firstStartedPulling="2025-11-22 09:36:45.910668907 +0000 UTC m=+9248.324062175" lastFinishedPulling="2025-11-22 09:36:48.762814593 +0000 UTC m=+9251.176207851" observedRunningTime="2025-11-22 09:36:49.981020327 +0000 UTC m=+9252.394413585" watchObservedRunningTime="2025-11-22 09:36:49.982552558 +0000 UTC m=+9252.395945826" Nov 22 09:36:52 crc kubenswrapper[4856]: I1122 09:36:52.682870 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:36:52 crc kubenswrapper[4856]: I1122 09:36:52.683439 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerName="nova-cell0-conductor-conductor" containerID="cri-o://4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" gracePeriod=30 Nov 22 09:36:52 crc kubenswrapper[4856]: E1122 09:36:52.812253 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:52 crc kubenswrapper[4856]: E1122 09:36:52.813493 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:52 crc kubenswrapper[4856]: E1122 09:36:52.814699 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:52 crc kubenswrapper[4856]: E1122 09:36:52.814742 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerName="nova-cell0-conductor-conductor" Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.199784 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.199965 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" containerName="nova-cell1-conductor-conductor" containerID="cri-o://611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748" gracePeriod=30 Nov 22 09:36:53 crc kubenswrapper[4856]: E1122 09:36:53.291342 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:53 crc kubenswrapper[4856]: E1122 09:36:53.294267 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:53 crc kubenswrapper[4856]: E1122 09:36:53.295668 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:53 crc kubenswrapper[4856]: E1122 09:36:53.295765 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" containerName="nova-cell1-conductor-conductor" Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.352772 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.353030 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-log" containerID="cri-o://db2203dcce0e38961a7c4d02903b0b86c38225490a82559e30eba9d7c39d6d00" gracePeriod=30 Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 
09:36:53.353126 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-api" containerID="cri-o://811699a363f8fa80b6295ee0f719b801ca4867d326580ffff6e9d8cde8caa1c2" gracePeriod=30 Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.404526 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.404762 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9ce55421-c0e1-4f25-9a74-8ae0e35b250c" containerName="nova-scheduler-scheduler" containerID="cri-o://7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" gracePeriod=30 Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.414934 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.415145 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-log" containerID="cri-o://2b5b955717e9bd38ae5b9a0616eaf2cac006e23a3db4204566fc610bea28865b" gracePeriod=30 Nov 22 09:36:53 crc kubenswrapper[4856]: I1122 09:36:53.415246 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-metadata" containerID="cri-o://4560371460c6dd4d4d43bcf8040fe8d4a482df1312737f51ad0d59f12c7664f0" gracePeriod=30 Nov 22 09:36:54 crc kubenswrapper[4856]: I1122 09:36:54.045434 4856 generic.go:334] "Generic (PLEG): container finished" podID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerID="2b5b955717e9bd38ae5b9a0616eaf2cac006e23a3db4204566fc610bea28865b" exitCode=143 Nov 22 09:36:54 crc kubenswrapper[4856]: I1122 09:36:54.045545 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb038800-5d2f-42f1-85b1-05d8aa807383","Type":"ContainerDied","Data":"2b5b955717e9bd38ae5b9a0616eaf2cac006e23a3db4204566fc610bea28865b"} Nov 22 09:36:54 crc kubenswrapper[4856]: I1122 09:36:54.049388 4856 generic.go:334] "Generic (PLEG): container finished" podID="bc1b3193-18b5-400b-a11c-7787373cc559" containerID="db2203dcce0e38961a7c4d02903b0b86c38225490a82559e30eba9d7c39d6d00" exitCode=143 Nov 22 09:36:54 crc kubenswrapper[4856]: I1122 09:36:54.049426 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc1b3193-18b5-400b-a11c-7787373cc559","Type":"ContainerDied","Data":"db2203dcce0e38961a7c4d02903b0b86c38225490a82559e30eba9d7c39d6d00"} Nov 22 09:36:54 crc kubenswrapper[4856]: I1122 09:36:54.255914 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:54 crc kubenswrapper[4856]: I1122 09:36:54.255962 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:54 crc kubenswrapper[4856]: I1122 09:36:54.303851 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:54 crc kubenswrapper[4856]: E1122 09:36:54.939354 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:36:54 crc kubenswrapper[4856]: E1122 09:36:54.940981 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:36:54 crc kubenswrapper[4856]: E1122 09:36:54.943116 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 22 09:36:54 crc kubenswrapper[4856]: E1122 09:36:54.943199 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="9ce55421-c0e1-4f25-9a74-8ae0e35b250c" containerName="nova-scheduler-scheduler" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.060052 4856 generic.go:334] "Generic (PLEG): container finished" podID="ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" containerID="611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748" exitCode=0 Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.060138 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6","Type":"ContainerDied","Data":"611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748"} Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.060180 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6","Type":"ContainerDied","Data":"f5fef5652751a84671badd752388cc36e6023a1d09f0c33aa3c4cfa82eaa5c67"} Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.060193 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5fef5652751a84671badd752388cc36e6023a1d09f0c33aa3c4cfa82eaa5c67" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.216212 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.272458 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4798k"] Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.341492 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.510984 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkw7v\" (UniqueName: \"kubernetes.io/projected/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-kube-api-access-pkw7v\") pod \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.511121 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-config-data\") pod \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.511171 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-combined-ca-bundle\") pod \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\" (UID: \"ca2731b0-1c70-4129-b2b3-5ae1d17abcd6\") " Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.517439 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-kube-api-access-pkw7v" (OuterVolumeSpecName: "kube-api-access-pkw7v") pod "ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" (UID: "ca2731b0-1c70-4129-b2b3-5ae1d17abcd6"). InnerVolumeSpecName "kube-api-access-pkw7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.548743 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-config-data" (OuterVolumeSpecName: "config-data") pod "ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" (UID: "ca2731b0-1c70-4129-b2b3-5ae1d17abcd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.549029 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" (UID: "ca2731b0-1c70-4129-b2b3-5ae1d17abcd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.613438 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkw7v\" (UniqueName: \"kubernetes.io/projected/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-kube-api-access-pkw7v\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.613465 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.613476 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.668640 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.817408 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-combined-ca-bundle\") pod \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.817473 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5q79\" (UniqueName: \"kubernetes.io/projected/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-kube-api-access-h5q79\") pod \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.817610 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-config-data\") pod \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\" (UID: \"9ce55421-c0e1-4f25-9a74-8ae0e35b250c\") " Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.822366 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-kube-api-access-h5q79" (OuterVolumeSpecName: "kube-api-access-h5q79") pod "9ce55421-c0e1-4f25-9a74-8ae0e35b250c" (UID: "9ce55421-c0e1-4f25-9a74-8ae0e35b250c"). InnerVolumeSpecName "kube-api-access-h5q79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.855840 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ce55421-c0e1-4f25-9a74-8ae0e35b250c" (UID: "9ce55421-c0e1-4f25-9a74-8ae0e35b250c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.880253 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-config-data" (OuterVolumeSpecName: "config-data") pod "9ce55421-c0e1-4f25-9a74-8ae0e35b250c" (UID: "9ce55421-c0e1-4f25-9a74-8ae0e35b250c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.919887 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5q79\" (UniqueName: \"kubernetes.io/projected/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-kube-api-access-h5q79\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.919936 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:55 crc kubenswrapper[4856]: I1122 09:36:55.919948 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce55421-c0e1-4f25-9a74-8ae0e35b250c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.074620 4856 generic.go:334] "Generic (PLEG): container finished" podID="9ce55421-c0e1-4f25-9a74-8ae0e35b250c" containerID="7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" exitCode=0 Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.076041 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.079979 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.079962 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ce55421-c0e1-4f25-9a74-8ae0e35b250c","Type":"ContainerDied","Data":"7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62"} Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.080257 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9ce55421-c0e1-4f25-9a74-8ae0e35b250c","Type":"ContainerDied","Data":"c6bb006538a4466e530c4133563d2a161e79e92d63344b749d608c10e489349f"} Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.080282 4856 scope.go:117] "RemoveContainer" containerID="7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.107062 4856 scope.go:117] "RemoveContainer" containerID="7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" Nov 22 09:36:56 crc kubenswrapper[4856]: E1122 09:36:56.107704 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62\": container with ID starting with 7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62 not found: ID does not exist" containerID="7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.107737 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62"} err="failed to get container status \"7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62\": rpc error: code = NotFound desc = could not find container \"7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62\": container with ID starting with 7755d718d2aadcff8e9beeebb78fc1a4651945701b822ee0fdd2bd7c0a266b62 not found: ID does not exist" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 
09:36:56.123422 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.136588 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.160725 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.160786 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.160803 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: E1122 09:36:56.161136 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4021796f-1cba-4573-9efa-4ed786ba2251" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.161148 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4021796f-1cba-4573-9efa-4ed786ba2251" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 22 09:36:56 crc kubenswrapper[4856]: E1122 09:36:56.161182 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ce55421-c0e1-4f25-9a74-8ae0e35b250c" containerName="nova-scheduler-scheduler" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.161188 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce55421-c0e1-4f25-9a74-8ae0e35b250c" containerName="nova-scheduler-scheduler" Nov 22 09:36:56 crc kubenswrapper[4856]: E1122 09:36:56.161234 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" containerName="nova-cell1-conductor-conductor" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.161240 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" containerName="nova-cell1-conductor-conductor" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.161428 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4021796f-1cba-4573-9efa-4ed786ba2251" containerName="neutron-dhcp-openstack-openstack-cell1" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.161443 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" containerName="nova-cell1-conductor-conductor" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.161477 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ce55421-c0e1-4f25-9a74-8ae0e35b250c" containerName="nova-scheduler-scheduler" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.162196 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.176570 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.179605 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.183251 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.184865 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.185998 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.194902 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.227359 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe29e444-cca9-41c8-920a-70302a80bf99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.227569 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkr86\" (UniqueName: \"kubernetes.io/projected/fe29e444-cca9-41c8-920a-70302a80bf99-kube-api-access-qkr86\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.228006 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe29e444-cca9-41c8-920a-70302a80bf99-config-data\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.330357 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkr86\" (UniqueName: \"kubernetes.io/projected/fe29e444-cca9-41c8-920a-70302a80bf99-kube-api-access-qkr86\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.330421 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf96v\" (UniqueName: \"kubernetes.io/projected/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-kube-api-access-vf96v\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.330518 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.330577 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe29e444-cca9-41c8-920a-70302a80bf99-config-data\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.330604 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 
09:36:56.330652 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe29e444-cca9-41c8-920a-70302a80bf99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.335119 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe29e444-cca9-41c8-920a-70302a80bf99-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.335173 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe29e444-cca9-41c8-920a-70302a80bf99-config-data\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.346177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkr86\" (UniqueName: \"kubernetes.io/projected/fe29e444-cca9-41c8-920a-70302a80bf99-kube-api-access-qkr86\") pod \"nova-scheduler-0\" (UID: \"fe29e444-cca9-41c8-920a-70302a80bf99\") " pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.431835 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf96v\" (UniqueName: \"kubernetes.io/projected/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-kube-api-access-vf96v\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.431928 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.431982 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.501186 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.516931 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.103:8774/\": read tcp 10.217.0.2:56450->10.217.1.103:8774: read: connection reset by peer" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.517449 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.103:8774/\": read tcp 10.217.0.2:56452->10.217.1.103:8774: read: connection reset by peer" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.722158 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce55421-c0e1-4f25-9a74-8ae0e35b250c" path="/var/lib/kubelet/pods/9ce55421-c0e1-4f25-9a74-8ae0e35b250c/volumes" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.722772 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2731b0-1c70-4129-b2b3-5ae1d17abcd6" path="/var/lib/kubelet/pods/ca2731b0-1c70-4129-b2b3-5ae1d17abcd6/volumes" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.968469 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.968633 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:56 crc kubenswrapper[4856]: I1122 09:36:56.975261 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf96v\" (UniqueName: \"kubernetes.io/projected/09683c8e-5f3c-4c9f-ab27-59ba9a51387e-kube-api-access-vf96v\") pod \"nova-cell1-conductor-0\" (UID: \"09683c8e-5f3c-4c9f-ab27-59ba9a51387e\") " pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.112636 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.115268 4856 generic.go:334] "Generic (PLEG): container finished" podID="bc1b3193-18b5-400b-a11c-7787373cc559" containerID="811699a363f8fa80b6295ee0f719b801ca4867d326580ffff6e9d8cde8caa1c2" exitCode=0 Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.115341 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc1b3193-18b5-400b-a11c-7787373cc559","Type":"ContainerDied","Data":"811699a363f8fa80b6295ee0f719b801ca4867d326580ffff6e9d8cde8caa1c2"} Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.125591 4856 generic.go:334] "Generic (PLEG): container finished" podID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerID="4560371460c6dd4d4d43bcf8040fe8d4a482df1312737f51ad0d59f12c7664f0" exitCode=0 Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.125828 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4798k" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="registry-server" containerID="cri-o://63f390a494370bd921057eede0143b78f7c7ce2c363521fbb8ab25d4ff00785c" gracePeriod=2 Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.126138 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb038800-5d2f-42f1-85b1-05d8aa807383","Type":"ContainerDied","Data":"4560371460c6dd4d4d43bcf8040fe8d4a482df1312737f51ad0d59f12c7664f0"} Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.436975 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.522869 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.556620 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.656840 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5s8x\" (UniqueName: \"kubernetes.io/projected/bc1b3193-18b5-400b-a11c-7787373cc559-kube-api-access-s5s8x\") pod \"bc1b3193-18b5-400b-a11c-7787373cc559\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657228 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-internal-tls-certs\") pod \"bc1b3193-18b5-400b-a11c-7787373cc559\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657344 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-public-tls-certs\") pod \"bc1b3193-18b5-400b-a11c-7787373cc559\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657401 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-combined-ca-bundle\") pod \"fb038800-5d2f-42f1-85b1-05d8aa807383\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657432 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-config-data\") pod \"fb038800-5d2f-42f1-85b1-05d8aa807383\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657551 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc1b3193-18b5-400b-a11c-7787373cc559-logs\") pod \"bc1b3193-18b5-400b-a11c-7787373cc559\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657603 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-config-data\") pod \"bc1b3193-18b5-400b-a11c-7787373cc559\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657635 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggzn6\" (UniqueName: \"kubernetes.io/projected/fb038800-5d2f-42f1-85b1-05d8aa807383-kube-api-access-ggzn6\") pod \"fb038800-5d2f-42f1-85b1-05d8aa807383\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657670 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb038800-5d2f-42f1-85b1-05d8aa807383-logs\") pod \"fb038800-5d2f-42f1-85b1-05d8aa807383\" (UID: \"fb038800-5d2f-42f1-85b1-05d8aa807383\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657707 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-nova-metadata-tls-certs\") pod \"fb038800-5d2f-42f1-85b1-05d8aa807383\" (UID: 
\"fb038800-5d2f-42f1-85b1-05d8aa807383\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.657731 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-combined-ca-bundle\") pod \"bc1b3193-18b5-400b-a11c-7787373cc559\" (UID: \"bc1b3193-18b5-400b-a11c-7787373cc559\") " Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.658157 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc1b3193-18b5-400b-a11c-7787373cc559-logs" (OuterVolumeSpecName: "logs") pod "bc1b3193-18b5-400b-a11c-7787373cc559" (UID: "bc1b3193-18b5-400b-a11c-7787373cc559"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.658412 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bc1b3193-18b5-400b-a11c-7787373cc559-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.659185 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb038800-5d2f-42f1-85b1-05d8aa807383-logs" (OuterVolumeSpecName: "logs") pod "fb038800-5d2f-42f1-85b1-05d8aa807383" (UID: "fb038800-5d2f-42f1-85b1-05d8aa807383"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.660447 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1b3193-18b5-400b-a11c-7787373cc559-kube-api-access-s5s8x" (OuterVolumeSpecName: "kube-api-access-s5s8x") pod "bc1b3193-18b5-400b-a11c-7787373cc559" (UID: "bc1b3193-18b5-400b-a11c-7787373cc559"). InnerVolumeSpecName "kube-api-access-s5s8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.662401 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb038800-5d2f-42f1-85b1-05d8aa807383-kube-api-access-ggzn6" (OuterVolumeSpecName: "kube-api-access-ggzn6") pod "fb038800-5d2f-42f1-85b1-05d8aa807383" (UID: "fb038800-5d2f-42f1-85b1-05d8aa807383"). InnerVolumeSpecName "kube-api-access-ggzn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.691941 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc1b3193-18b5-400b-a11c-7787373cc559" (UID: "bc1b3193-18b5-400b-a11c-7787373cc559"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.711788 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-config-data" (OuterVolumeSpecName: "config-data") pod "bc1b3193-18b5-400b-a11c-7787373cc559" (UID: "bc1b3193-18b5-400b-a11c-7787373cc559"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.715685 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-config-data" (OuterVolumeSpecName: "config-data") pod "fb038800-5d2f-42f1-85b1-05d8aa807383" (UID: "fb038800-5d2f-42f1-85b1-05d8aa807383"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.722988 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb038800-5d2f-42f1-85b1-05d8aa807383" (UID: "fb038800-5d2f-42f1-85b1-05d8aa807383"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.726884 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fb038800-5d2f-42f1-85b1-05d8aa807383" (UID: "fb038800-5d2f-42f1-85b1-05d8aa807383"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.728114 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bc1b3193-18b5-400b-a11c-7787373cc559" (UID: "bc1b3193-18b5-400b-a11c-7787373cc559"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.746052 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bc1b3193-18b5-400b-a11c-7787373cc559" (UID: "bc1b3193-18b5-400b-a11c-7787373cc559"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760448 4856 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760488 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760517 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760532 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760551 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggzn6\" (UniqueName: \"kubernetes.io/projected/fb038800-5d2f-42f1-85b1-05d8aa807383-kube-api-access-ggzn6\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760563 4856 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb038800-5d2f-42f1-85b1-05d8aa807383-logs\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760574 4856 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb038800-5d2f-42f1-85b1-05d8aa807383-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760586 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760597 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5s8x\" (UniqueName: \"kubernetes.io/projected/bc1b3193-18b5-400b-a11c-7787373cc559-kube-api-access-s5s8x\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.760607 4856 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc1b3193-18b5-400b-a11c-7787373cc559-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:57 crc kubenswrapper[4856]: I1122 09:36:57.781276 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 22 09:36:57 crc kubenswrapper[4856]: E1122 09:36:57.813110 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:57 crc kubenswrapper[4856]: E1122 09:36:57.814589 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , 
exit code -1" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:57 crc kubenswrapper[4856]: E1122 09:36:57.815907 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:36:57 crc kubenswrapper[4856]: E1122 09:36:57.815955 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerName="nova-cell0-conductor-conductor" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.135886 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fe29e444-cca9-41c8-920a-70302a80bf99","Type":"ContainerStarted","Data":"fc4377dc470efc396a18a06425759d4e68e111e30aeb7b6ff661acfdc9651266"} Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.139164 4856 generic.go:334] "Generic (PLEG): container finished" podID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerID="63f390a494370bd921057eede0143b78f7c7ce2c363521fbb8ab25d4ff00785c" exitCode=0 Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.139248 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4798k" event={"ID":"0a52905c-c240-425f-982c-987eb0fbe3e9","Type":"ContainerDied","Data":"63f390a494370bd921057eede0143b78f7c7ce2c363521fbb8ab25d4ff00785c"} Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.139300 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4798k" event={"ID":"0a52905c-c240-425f-982c-987eb0fbe3e9","Type":"ContainerDied","Data":"a29e89746cdfd3c86e919c98f4d12a707042a08ab1d283b2daacaa5403acbd03"} Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.139315 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a29e89746cdfd3c86e919c98f4d12a707042a08ab1d283b2daacaa5403acbd03" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.141605 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bc1b3193-18b5-400b-a11c-7787373cc559","Type":"ContainerDied","Data":"c8aba31375ae803262782f0eda1688f2a8f6f0fb3ae47b1d0c94ee699b1c2a19"} Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.141644 4856 scope.go:117] "RemoveContainer" containerID="811699a363f8fa80b6295ee0f719b801ca4867d326580ffff6e9d8cde8caa1c2" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.141614 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.142824 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"09683c8e-5f3c-4c9f-ab27-59ba9a51387e","Type":"ContainerStarted","Data":"744261ec274aec64f913b75372eafe3a0ff506814e5794bc56e319f4c5aaae7e"} Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.144871 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fb038800-5d2f-42f1-85b1-05d8aa807383","Type":"ContainerDied","Data":"d00503ba8cd7c221f0d7212ee4eef6c987be5c4d94e3ec8d4331f91c4f46acbe"} Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.144949 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.822845 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.827867 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.829318 4856 scope.go:117] "RemoveContainer" containerID="db2203dcce0e38961a7c4d02903b0b86c38225490a82559e30eba9d7c39d6d00" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.931381 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.943404 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.947570 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxrg5\" (UniqueName: \"kubernetes.io/projected/0a52905c-c240-425f-982c-987eb0fbe3e9-kube-api-access-pxrg5\") pod \"0a52905c-c240-425f-982c-987eb0fbe3e9\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.947643 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-utilities\") pod \"0a52905c-c240-425f-982c-987eb0fbe3e9\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.947775 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-catalog-content\") pod \"0a52905c-c240-425f-982c-987eb0fbe3e9\" (UID: \"0a52905c-c240-425f-982c-987eb0fbe3e9\") " Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.952501 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a52905c-c240-425f-982c-987eb0fbe3e9-kube-api-access-pxrg5" (OuterVolumeSpecName: "kube-api-access-pxrg5") pod "0a52905c-c240-425f-982c-987eb0fbe3e9" (UID: "0a52905c-c240-425f-982c-987eb0fbe3e9"). InnerVolumeSpecName "kube-api-access-pxrg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.960998 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.971958 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.974484 4856 scope.go:117] "RemoveContainer" containerID="4560371460c6dd4d4d43bcf8040fe8d4a482df1312737f51ad0d59f12c7664f0" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.986327 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 22 09:36:58 crc kubenswrapper[4856]: E1122 09:36:58.986988 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="extract-content" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987007 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="extract-content" Nov 22 09:36:58 crc kubenswrapper[4856]: E1122 09:36:58.987077 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-log" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987084 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-log" Nov 22 09:36:58 crc kubenswrapper[4856]: E1122 09:36:58.987100 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-log" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987107 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-log" Nov 22 09:36:58 crc kubenswrapper[4856]: E1122 09:36:58.987116 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="registry-server" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987122 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="registry-server" Nov 22 09:36:58 crc kubenswrapper[4856]: E1122 09:36:58.987132 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="extract-utilities" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987138 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="extract-utilities" Nov 22 09:36:58 crc kubenswrapper[4856]: E1122 09:36:58.987150 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-metadata" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987156 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-metadata" Nov 22 09:36:58 crc kubenswrapper[4856]: E1122 09:36:58.987168 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-api" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987174 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-api" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987359 4856 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-log" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987370 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-api" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987382 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" containerName="registry-server" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987391 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" containerName="nova-metadata-metadata" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.987414 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1b3193-18b5-400b-a11c-7787373cc559" containerName="nova-api-log" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.988719 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.993713 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.994498 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 22 09:36:58 crc kubenswrapper[4856]: I1122 09:36:58.997091 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.013730 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.018408 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-utilities" (OuterVolumeSpecName: "utilities") pod "0a52905c-c240-425f-982c-987eb0fbe3e9" (UID: "0a52905c-c240-425f-982c-987eb0fbe3e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.031023 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.032961 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.036738 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.040725 4856 scope.go:117] "RemoveContainer" containerID="2b5b955717e9bd38ae5b9a0616eaf2cac006e23a3db4204566fc610bea28865b" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.043234 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.050491 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxrg5\" (UniqueName: \"kubernetes.io/projected/0a52905c-c240-425f-982c-987eb0fbe3e9-kube-api-access-pxrg5\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.050547 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.053178 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.060588 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.111992 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a52905c-c240-425f-982c-987eb0fbe3e9" (UID: "0a52905c-c240-425f-982c-987eb0fbe3e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152048 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152094 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152117 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdlw\" (UniqueName: \"kubernetes.io/projected/1320212e-aa18-4900-8d1f-6935e2d18225-kube-api-access-9mdlw\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152179 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkq5\" (UniqueName: \"kubernetes.io/projected/12e51766-4906-4715-8a2e-ba76c14f18cc-kube-api-access-gtkq5\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152224 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152240 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1320212e-aa18-4900-8d1f-6935e2d18225-logs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152263 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-config-data\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152283 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152330 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-config-data\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152351 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-public-tls-certs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152395 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51766-4906-4715-8a2e-ba76c14f18cc-logs\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.152472 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a52905c-c240-425f-982c-987eb0fbe3e9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.164254 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fe29e444-cca9-41c8-920a-70302a80bf99","Type":"ContainerStarted","Data":"68fddd923fc7868cfeedfd08d04324650aedd2450a401518ec029d06b1e49eac"} Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.167759 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4798k" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.201013 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.200995437 podStartE2EDuration="3.200995437s" podCreationTimestamp="2025-11-22 09:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:36:59.180078332 +0000 UTC m=+9261.593471590" watchObservedRunningTime="2025-11-22 09:36:59.200995437 +0000 UTC m=+9261.614388695" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.210148 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4798k"] Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.220305 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4798k"] Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.255848 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-config-data\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.255893 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-public-tls-certs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.255943 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51766-4906-4715-8a2e-ba76c14f18cc-logs\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256017 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256035 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256050 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mdlw\" (UniqueName: \"kubernetes.io/projected/1320212e-aa18-4900-8d1f-6935e2d18225-kube-api-access-9mdlw\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256072 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtkq5\" (UniqueName: \"kubernetes.io/projected/12e51766-4906-4715-8a2e-ba76c14f18cc-kube-api-access-gtkq5\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256110 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256126 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1320212e-aa18-4900-8d1f-6935e2d18225-logs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-config-data\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.256170 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.257118 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12e51766-4906-4715-8a2e-ba76c14f18cc-logs\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.257914 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1320212e-aa18-4900-8d1f-6935e2d18225-logs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.263273 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.263298 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.263364 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.263731 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-config-data\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.263802 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12e51766-4906-4715-8a2e-ba76c14f18cc-config-data\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.265349 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.272292 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1320212e-aa18-4900-8d1f-6935e2d18225-public-tls-certs\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.274136 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtkq5\" (UniqueName: \"kubernetes.io/projected/12e51766-4906-4715-8a2e-ba76c14f18cc-kube-api-access-gtkq5\") pod \"nova-metadata-0\" (UID: \"12e51766-4906-4715-8a2e-ba76c14f18cc\") " pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.281808 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mdlw\" (UniqueName: \"kubernetes.io/projected/1320212e-aa18-4900-8d1f-6935e2d18225-kube-api-access-9mdlw\") pod \"nova-api-0\" (UID: \"1320212e-aa18-4900-8d1f-6935e2d18225\") " pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.366785 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.374803 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.882407 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 22 09:36:59 crc kubenswrapper[4856]: I1122 09:36:59.938211 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 22 09:36:59 crc kubenswrapper[4856]: W1122 09:36:59.938343 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12e51766_4906_4715_8a2e_ba76c14f18cc.slice/crio-744e8fc3f653ab478db083dcc6bd3c7a0224e23bb0aee44c6de406abd1d1aa5d WatchSource:0}: Error finding container 744e8fc3f653ab478db083dcc6bd3c7a0224e23bb0aee44c6de406abd1d1aa5d: Status 404 returned error can't find the container with id 744e8fc3f653ab478db083dcc6bd3c7a0224e23bb0aee44c6de406abd1d1aa5d Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.179064 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"09683c8e-5f3c-4c9f-ab27-59ba9a51387e","Type":"ContainerStarted","Data":"a4a941e8de87b65a321b8aa8420fe35d0e9c3bf2477d2dc70dc725ef09336873"} Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.179437 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.181209 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12e51766-4906-4715-8a2e-ba76c14f18cc","Type":"ContainerStarted","Data":"88fec2bb23434e281d9b5b59c3f2657e88f232d98858554b89788133015103a7"} Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.181285 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12e51766-4906-4715-8a2e-ba76c14f18cc","Type":"ContainerStarted","Data":"744e8fc3f653ab478db083dcc6bd3c7a0224e23bb0aee44c6de406abd1d1aa5d"} Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.193834 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1320212e-aa18-4900-8d1f-6935e2d18225","Type":"ContainerStarted","Data":"a2763dac6997e857fbca7d5625c93316380df244839bec30244cbbc64c1b6472"} Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.193900 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1320212e-aa18-4900-8d1f-6935e2d18225","Type":"ContainerStarted","Data":"7bc4cd5bae5957b01b2986419930537bfc87ea9d2544616a30208debc5942997"} Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.200534 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=4.200495883 podStartE2EDuration="4.200495883s" podCreationTimestamp="2025-11-22 09:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:37:00.19630331 +0000 UTC m=+9262.609696568" watchObservedRunningTime="2025-11-22 09:37:00.200495883 +0000 UTC m=+9262.613889141" Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.721719 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a52905c-c240-425f-982c-987eb0fbe3e9" path="/var/lib/kubelet/pods/0a52905c-c240-425f-982c-987eb0fbe3e9/volumes" Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.723003 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc1b3193-18b5-400b-a11c-7787373cc559" path="/var/lib/kubelet/pods/bc1b3193-18b5-400b-a11c-7787373cc559/volumes" Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.723673 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb038800-5d2f-42f1-85b1-05d8aa807383" path="/var/lib/kubelet/pods/fb038800-5d2f-42f1-85b1-05d8aa807383/volumes" Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.953091 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzbc6"] Nov 22 09:37:00 crc kubenswrapper[4856]: I1122 09:37:00.953395 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zzbc6" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="registry-server" containerID="cri-o://d92c2ee7f8bbc637f3b4d8316c22bb428c47a0391bdcb8ab2089ecd79f81e794" gracePeriod=2 Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.229834 4856 generic.go:334] "Generic (PLEG): container finished" podID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerID="d92c2ee7f8bbc637f3b4d8316c22bb428c47a0391bdcb8ab2089ecd79f81e794" exitCode=0 Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.230109 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzbc6" event={"ID":"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4","Type":"ContainerDied","Data":"d92c2ee7f8bbc637f3b4d8316c22bb428c47a0391bdcb8ab2089ecd79f81e794"} Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.237937 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"12e51766-4906-4715-8a2e-ba76c14f18cc","Type":"ContainerStarted","Data":"d419813f946e42efbb80b46723596bb393d01bd5c251027cd96e27c886737678"} Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.250781 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1320212e-aa18-4900-8d1f-6935e2d18225","Type":"ContainerStarted","Data":"abafe9deedc1d6ce4038cb1923b238b84509e98c6b3540a79e80f5ec0561d945"} Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.270285 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.270265438 podStartE2EDuration="3.270265438s" podCreationTimestamp="2025-11-22 09:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:37:01.267189815 +0000 UTC m=+9263.680583083" watchObservedRunningTime="2025-11-22 09:37:01.270265438 +0000 UTC m=+9263.683658696" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.290368 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.29034656 podStartE2EDuration="3.29034656s" podCreationTimestamp="2025-11-22 09:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:37:01.283800983 +0000 UTC m=+9263.697194251" watchObservedRunningTime="2025-11-22 09:37:01.29034656 +0000 UTC m=+9263.703739818" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.435466 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.500753 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmh27\" (UniqueName: \"kubernetes.io/projected/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-kube-api-access-hmh27\") pod \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.500875 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-utilities\") pod \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.501260 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-catalog-content\") pod \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\" (UID: \"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4\") " Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.502214 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-utilities" (OuterVolumeSpecName: "utilities") pod "0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" (UID: "0460eb5e-d3b1-484e-ac1c-fc406beaf1d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.502437 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.516765 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-kube-api-access-hmh27" (OuterVolumeSpecName: "kube-api-access-hmh27") pod "0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" (UID: "0460eb5e-d3b1-484e-ac1c-fc406beaf1d4"). InnerVolumeSpecName "kube-api-access-hmh27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.604888 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmh27\" (UniqueName: \"kubernetes.io/projected/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-kube-api-access-hmh27\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.604932 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.732794 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" (UID: "0460eb5e-d3b1-484e-ac1c-fc406beaf1d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:37:01 crc kubenswrapper[4856]: I1122 09:37:01.808410 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.262806 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzbc6" event={"ID":"0460eb5e-d3b1-484e-ac1c-fc406beaf1d4","Type":"ContainerDied","Data":"59a3d8277e0bfea1827a8b3e5d54556d76c7b2709dc8cbc9e77059e9112f82b4"} Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.262864 4856 scope.go:117] "RemoveContainer" containerID="d92c2ee7f8bbc637f3b4d8316c22bb428c47a0391bdcb8ab2089ecd79f81e794" Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.263077 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzbc6" Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.284813 4856 scope.go:117] "RemoveContainer" containerID="96a33c82abf99c52797d32e39565cdebeb6d85585496def4a49628156a14db78" Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.311165 4856 scope.go:117] "RemoveContainer" containerID="da5bc7fecd0ab4da33d151e9f2dc374c5d42d92618f6128ad7fa00f32b7376d7" Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.316955 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzbc6"] Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.325946 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zzbc6"] Nov 22 09:37:02 crc kubenswrapper[4856]: I1122 09:37:02.725936 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" path="/var/lib/kubelet/pods/0460eb5e-d3b1-484e-ac1c-fc406beaf1d4/volumes" Nov 22 09:37:02 crc kubenswrapper[4856]: E1122 09:37:02.810819 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785 is running failed: container process not found" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:37:02 crc kubenswrapper[4856]: E1122 09:37:02.811612 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785 is running failed: container process not found" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:37:02 crc kubenswrapper[4856]: E1122 09:37:02.811926 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785 is running failed: container process not found" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Nov 22 09:37:02 crc kubenswrapper[4856]: E1122 09:37:02.811975 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerName="nova-cell0-conductor-conductor" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.169625 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.234027 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-config-data\") pod \"9dc40524-1b5e-4265-b926-8714e07bc20d\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.234152 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zgxs\" (UniqueName: \"kubernetes.io/projected/9dc40524-1b5e-4265-b926-8714e07bc20d-kube-api-access-9zgxs\") pod \"9dc40524-1b5e-4265-b926-8714e07bc20d\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.234335 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-combined-ca-bundle\") pod \"9dc40524-1b5e-4265-b926-8714e07bc20d\" (UID: \"9dc40524-1b5e-4265-b926-8714e07bc20d\") " Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.245380 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc40524-1b5e-4265-b926-8714e07bc20d-kube-api-access-9zgxs" (OuterVolumeSpecName: "kube-api-access-9zgxs") pod "9dc40524-1b5e-4265-b926-8714e07bc20d" (UID: "9dc40524-1b5e-4265-b926-8714e07bc20d"). InnerVolumeSpecName "kube-api-access-9zgxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.263621 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dc40524-1b5e-4265-b926-8714e07bc20d" (UID: "9dc40524-1b5e-4265-b926-8714e07bc20d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.265238 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-config-data" (OuterVolumeSpecName: "config-data") pod "9dc40524-1b5e-4265-b926-8714e07bc20d" (UID: "9dc40524-1b5e-4265-b926-8714e07bc20d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.279183 4856 generic.go:334] "Generic (PLEG): container finished" podID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" exitCode=0 Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.279248 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9dc40524-1b5e-4265-b926-8714e07bc20d","Type":"ContainerDied","Data":"4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785"} Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.279273 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"9dc40524-1b5e-4265-b926-8714e07bc20d","Type":"ContainerDied","Data":"9655c0d336c0c5fe388f0a646a9e95d768e8641e0194311b0aa6487972cb682e"} Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.279290 4856 scope.go:117] "RemoveContainer" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.279381 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.319067 4856 scope.go:117] "RemoveContainer" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" Nov 22 09:37:03 crc kubenswrapper[4856]: E1122 09:37:03.319399 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785\": container with ID starting with 4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785 not found: ID does not exist" containerID="4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.319433 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785"} err="failed to get container status \"4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785\": rpc error: code = NotFound desc = could not find container \"4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785\": container with ID starting with 4e99e51c22141ef9e7f0d17eb9c4167f5c4923bd4e38be5906194d3ebb4ff785 not found: ID does not exist" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.337049 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zgxs\" (UniqueName: \"kubernetes.io/projected/9dc40524-1b5e-4265-b926-8714e07bc20d-kube-api-access-9zgxs\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.337092 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.337105 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc40524-1b5e-4265-b926-8714e07bc20d-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.371858 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 
09:37:03.380419 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.388302 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:37:03 crc kubenswrapper[4856]: E1122 09:37:03.388706 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="extract-utilities" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.388724 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="extract-utilities" Nov 22 09:37:03 crc kubenswrapper[4856]: E1122 09:37:03.388749 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="registry-server" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.388757 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="registry-server" Nov 22 09:37:03 crc kubenswrapper[4856]: E1122 09:37:03.388779 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerName="nova-cell0-conductor-conductor" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.388786 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerName="nova-cell0-conductor-conductor" Nov 22 09:37:03 crc kubenswrapper[4856]: E1122 09:37:03.388800 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="extract-content" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.388806 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="extract-content" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.388994 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0460eb5e-d3b1-484e-ac1c-fc406beaf1d4" containerName="registry-server" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.389024 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" containerName="nova-cell0-conductor-conductor" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.389752 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.398146 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.408255 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.540665 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.540954 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.541226 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbkr7\" (UniqueName: \"kubernetes.io/projected/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-kube-api-access-pbkr7\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.642584 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbkr7\" (UniqueName: \"kubernetes.io/projected/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-kube-api-access-pbkr7\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.642668 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.642760 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.646574 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.647178 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.661035 4856 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbkr7\" (UniqueName: \"kubernetes.io/projected/e066a9cb-49d5-4f3f-9e6c-fd3c10084936-kube-api-access-pbkr7\") pod \"nova-cell0-conductor-0\" (UID: \"e066a9cb-49d5-4f3f-9e6c-fd3c10084936\") " pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:03 crc kubenswrapper[4856]: I1122 09:37:03.716698 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:04 crc kubenswrapper[4856]: I1122 09:37:04.180899 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 22 09:37:04 crc kubenswrapper[4856]: W1122 09:37:04.186701 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode066a9cb_49d5_4f3f_9e6c_fd3c10084936.slice/crio-dc1d7f23baf535a563cddc4333b0bca4c5ab465d032dc4d0d3c452e65e61ba6c WatchSource:0}: Error finding container dc1d7f23baf535a563cddc4333b0bca4c5ab465d032dc4d0d3c452e65e61ba6c: Status 404 returned error can't find the container with id dc1d7f23baf535a563cddc4333b0bca4c5ab465d032dc4d0d3c452e65e61ba6c Nov 22 09:37:04 crc kubenswrapper[4856]: I1122 09:37:04.303343 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e066a9cb-49d5-4f3f-9e6c-fd3c10084936","Type":"ContainerStarted","Data":"dc1d7f23baf535a563cddc4333b0bca4c5ab465d032dc4d0d3c452e65e61ba6c"} Nov 22 09:37:04 crc kubenswrapper[4856]: I1122 09:37:04.375537 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 09:37:04 crc kubenswrapper[4856]: I1122 09:37:04.375599 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 22 09:37:04 crc kubenswrapper[4856]: I1122 09:37:04.722460 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc40524-1b5e-4265-b926-8714e07bc20d" path="/var/lib/kubelet/pods/9dc40524-1b5e-4265-b926-8714e07bc20d/volumes" Nov 22 09:37:05 crc kubenswrapper[4856]: I1122 09:37:05.318806 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e066a9cb-49d5-4f3f-9e6c-fd3c10084936","Type":"ContainerStarted","Data":"ff4873a2633dd6826de4649f874050ae4ff562727f120c23c08a548adc845e3f"} Nov 22 09:37:05 crc kubenswrapper[4856]: I1122 09:37:05.319419 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:06 crc kubenswrapper[4856]: I1122 09:37:06.504928 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 22 09:37:06 crc kubenswrapper[4856]: I1122 09:37:06.903839 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 22 09:37:06 crc kubenswrapper[4856]: I1122 09:37:06.925021 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.9250031610000002 podStartE2EDuration="3.925003161s" podCreationTimestamp="2025-11-22 09:37:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 09:37:05.352119747 +0000 UTC m=+9267.765513045" watchObservedRunningTime="2025-11-22 09:37:06.925003161 +0000 UTC m=+9269.338396419" Nov 22 09:37:07 crc kubenswrapper[4856]: I1122 09:37:07.153498 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 22 09:37:07 crc kubenswrapper[4856]: I1122 09:37:07.388125 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 22 09:37:09 crc kubenswrapper[4856]: I1122 09:37:09.367478 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:37:09 crc kubenswrapper[4856]: I1122 09:37:09.367989 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 22 09:37:09 crc kubenswrapper[4856]: I1122 09:37:09.375014 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 09:37:09 crc kubenswrapper[4856]: I1122 09:37:09.375125 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 22 09:37:10 crc kubenswrapper[4856]: I1122 09:37:10.402760 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12e51766-4906-4715-8a2e-ba76c14f18cc" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:37:10 crc kubenswrapper[4856]: I1122 09:37:10.402753 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1320212e-aa18-4900-8d1f-6935e2d18225" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:37:10 crc kubenswrapper[4856]: I1122 09:37:10.402822 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1320212e-aa18-4900-8d1f-6935e2d18225" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.189:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:37:10 crc kubenswrapper[4856]: I1122 09:37:10.402848 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="12e51766-4906-4715-8a2e-ba76c14f18cc" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:37:13 crc kubenswrapper[4856]: I1122 09:37:13.745942 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.374595 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.375397 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.375990 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.384245 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.389752 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.390494 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.399079 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.472828 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.477715 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 22 09:37:19 crc kubenswrapper[4856]: I1122 09:37:19.479781 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.857543 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq"] Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.860457 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.863640 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-dhz9f" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.863957 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.863966 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.863966 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.864251 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-cells-global-config" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.864576 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.864926 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.870421 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq"] Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917544 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917595 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917712 4856 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917736 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917775 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58b4\" (UniqueName: \"kubernetes.io/projected/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-kube-api-access-z58b4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917798 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917822 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917865 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:20 crc kubenswrapper[4856]: I1122 09:37:20.917899 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.019432 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.019755 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.019914 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z58b4\" (UniqueName: \"kubernetes.io/projected/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-kube-api-access-z58b4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.020048 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.020171 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.020298 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.020433 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.020603 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: 
\"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.020733 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.021635 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cells-global-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.026153 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.026229 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-1\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.026446 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-ssh-key\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.026539 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.027134 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-combined-ca-bundle\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.029551 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-inventory\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.031298 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-0\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.036722 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z58b4\" (UniqueName: \"kubernetes.io/projected/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-kube-api-access-z58b4\") pod \"nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.186145 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:37:21 crc kubenswrapper[4856]: I1122 09:37:21.762960 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq"] Nov 22 09:37:21 crc kubenswrapper[4856]: W1122 09:37:21.767729 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bdbb850_a5cf_4f8e_ae2e_88655ceda16c.slice/crio-c5ba774d8f39f05ddf1ff09bf8863f890216b8eed2b7d0774d36fe450bedc266 WatchSource:0}: Error finding container c5ba774d8f39f05ddf1ff09bf8863f890216b8eed2b7d0774d36fe450bedc266: Status 404 returned error can't find the container with id c5ba774d8f39f05ddf1ff09bf8863f890216b8eed2b7d0774d36fe450bedc266 Nov 22 09:37:22 crc kubenswrapper[4856]: I1122 09:37:22.502031 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" event={"ID":"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c","Type":"ContainerStarted","Data":"227481d16d6becf623d2fd5f3a663a24bd6af2f03aed4fe5829bbac069545f7d"} Nov 22 09:37:22 crc kubenswrapper[4856]: I1122 09:37:22.502334 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" event={"ID":"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c","Type":"ContainerStarted","Data":"c5ba774d8f39f05ddf1ff09bf8863f890216b8eed2b7d0774d36fe450bedc266"} Nov 22 09:37:22 crc kubenswrapper[4856]: I1122 09:37:22.530303 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" podStartSLOduration=2.14431843 podStartE2EDuration="2.530286785s" podCreationTimestamp="2025-11-22 09:37:20 +0000 UTC" firstStartedPulling="2025-11-22 09:37:21.770857973 +0000 UTC m=+9284.184251231" lastFinishedPulling="2025-11-22 09:37:22.156826338 +0000 UTC m=+9284.570219586" observedRunningTime="2025-11-22 09:37:22.524476599 +0000 UTC m=+9284.937869857" watchObservedRunningTime="2025-11-22 09:37:22.530286785 +0000 UTC m=+9284.943680043" Nov 22 09:37:49 crc 
kubenswrapper[4856]: I1122 09:37:49.750906 4856 scope.go:117] "RemoveContainer" containerID="611b59ee08ab058ac6753040a2c087b8fc3a9a3f3797adf808d52b4785da2748" Nov 22 09:37:59 crc kubenswrapper[4856]: I1122 09:37:59.755120 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:37:59 crc kubenswrapper[4856]: I1122 09:37:59.756399 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:38:29 crc kubenswrapper[4856]: I1122 09:38:29.755215 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:38:29 crc kubenswrapper[4856]: I1122 09:38:29.756311 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:38:59 crc kubenswrapper[4856]: I1122 09:38:59.754350 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:38:59 crc kubenswrapper[4856]: I1122 09:38:59.755003 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:38:59 crc kubenswrapper[4856]: I1122 09:38:59.755057 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:38:59 crc kubenswrapper[4856]: I1122 09:38:59.755969 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1d646530f87228299c4981c638bb3d4d4475a9e02490d87890d2a187ed1d6e4"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:38:59 crc kubenswrapper[4856]: I1122 09:38:59.756026 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://b1d646530f87228299c4981c638bb3d4d4475a9e02490d87890d2a187ed1d6e4" gracePeriod=600 Nov 22 09:39:00 crc kubenswrapper[4856]: I1122 09:39:00.622828 4856 generic.go:334] "Generic (PLEG): container finished" 
podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="b1d646530f87228299c4981c638bb3d4d4475a9e02490d87890d2a187ed1d6e4" exitCode=0 Nov 22 09:39:00 crc kubenswrapper[4856]: I1122 09:39:00.622918 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"b1d646530f87228299c4981c638bb3d4d4475a9e02490d87890d2a187ed1d6e4"} Nov 22 09:39:00 crc kubenswrapper[4856]: I1122 09:39:00.623422 4856 scope.go:117] "RemoveContainer" containerID="ec7ddc154cf3a48b38e5271ea7485099e95fd1884843b588613c4e18a17e17c4" Nov 22 09:39:01 crc kubenswrapper[4856]: I1122 09:39:01.046072 4856 trace.go:236] Trace[1588639194]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-2" (22-Nov-2025 09:38:59.864) (total time: 1181ms): Nov 22 09:39:01 crc kubenswrapper[4856]: Trace[1588639194]: [1.181351748s] [1.181351748s] END Nov 22 09:39:01 crc kubenswrapper[4856]: I1122 09:39:01.635861 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4"} Nov 22 09:41:29 crc kubenswrapper[4856]: I1122 09:41:29.756666 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:41:29 crc kubenswrapper[4856]: I1122 09:41:29.757256 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:41:59 crc kubenswrapper[4856]: I1122 09:41:59.754052 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:41:59 crc kubenswrapper[4856]: I1122 09:41:59.754686 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:42:22 crc kubenswrapper[4856]: I1122 09:42:22.697072 4856 generic.go:334] "Generic (PLEG): container finished" podID="1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" containerID="227481d16d6becf623d2fd5f3a663a24bd6af2f03aed4fe5829bbac069545f7d" exitCode=0 Nov 22 09:42:22 crc kubenswrapper[4856]: I1122 09:42:22.697548 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" event={"ID":"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c","Type":"ContainerDied","Data":"227481d16d6becf623d2fd5f3a663a24bd6af2f03aed4fe5829bbac069545f7d"} Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.133303 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.290757 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-0\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.290815 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-inventory\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.290869 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-combined-ca-bundle\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.290889 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cells-global-config-0\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.290940 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-1\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.291016 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-1\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.291042 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z58b4\" (UniqueName: \"kubernetes.io/projected/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-kube-api-access-z58b4\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.291114 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-ssh-key\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.291162 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-0\") pod \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\" (UID: \"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c\") " Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.301982 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-kube-api-access-z58b4" (OuterVolumeSpecName: "kube-api-access-z58b4") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "kube-api-access-z58b4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.302781 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-combined-ca-bundle" (OuterVolumeSpecName: "nova-cell1-combined-ca-bundle") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "nova-cell1-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.324204 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.326488 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-inventory" (OuterVolumeSpecName: "inventory") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.329884 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cells-global-config-0" (OuterVolumeSpecName: "nova-cells-global-config-0") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "nova-cells-global-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.343505 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.343977 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.348487 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.357032 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" (UID: "1bdbb850-a5cf-4f8e-ae2e-88655ceda16c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393627 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393666 4856 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393681 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393694 4856 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-inventory\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393709 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393720 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cells-global-config-0\" (UniqueName: \"kubernetes.io/configmap/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cells-global-config-0\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393731 4856 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393742 4856 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.393753 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z58b4\" (UniqueName: \"kubernetes.io/projected/1bdbb850-a5cf-4f8e-ae2e-88655ceda16c-kube-api-access-z58b4\") on node \"crc\" DevicePath \"\"" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.720962 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.728904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq" event={"ID":"1bdbb850-a5cf-4f8e-ae2e-88655ceda16c","Type":"ContainerDied","Data":"c5ba774d8f39f05ddf1ff09bf8863f890216b8eed2b7d0774d36fe450bedc266"} Nov 22 09:42:24 crc kubenswrapper[4856]: I1122 09:42:24.728962 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5ba774d8f39f05ddf1ff09bf8863f890216b8eed2b7d0774d36fe450bedc266" Nov 22 09:42:29 crc kubenswrapper[4856]: I1122 09:42:29.754398 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:42:29 crc kubenswrapper[4856]: I1122 09:42:29.754957 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:42:29 crc kubenswrapper[4856]: I1122 09:42:29.755028 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:42:29 crc kubenswrapper[4856]: I1122 09:42:29.755689 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:42:29 crc kubenswrapper[4856]: I1122 09:42:29.755757 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" gracePeriod=600 Nov 22 09:42:30 crc kubenswrapper[4856]: I1122 09:42:30.810453 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" exitCode=0 Nov 22 09:42:30 crc kubenswrapper[4856]: I1122 09:42:30.810811 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4"} Nov 22 09:42:30 crc kubenswrapper[4856]: I1122 09:42:30.810852 4856 scope.go:117] "RemoveContainer" containerID="b1d646530f87228299c4981c638bb3d4d4475a9e02490d87890d2a187ed1d6e4" Nov 22 09:42:31 crc kubenswrapper[4856]: E1122 09:42:31.094015 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:42:31 crc kubenswrapper[4856]: I1122 09:42:31.825712 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:42:31 crc kubenswrapper[4856]: E1122 09:42:31.826448 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:42:43 crc kubenswrapper[4856]: I1122 09:42:43.709823 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:42:43 crc kubenswrapper[4856]: E1122 09:42:43.710623 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:42:49 crc kubenswrapper[4856]: I1122 09:42:49.980375 4856 scope.go:117] "RemoveContainer" containerID="57e1f646294fc29111ce965c1122ad069328e9cf1bf0543abd5a4f5867d77660" Nov 22 09:42:50 crc kubenswrapper[4856]: I1122 09:42:50.028148 4856 scope.go:117] "RemoveContainer" containerID="63f390a494370bd921057eede0143b78f7c7ce2c363521fbb8ab25d4ff00785c" Nov 22 09:42:50 crc kubenswrapper[4856]: I1122 09:42:50.081981 4856 scope.go:117] "RemoveContainer" containerID="dafde85079586ddccd2a4dc46014e4bae44d1f7ab2bd4b8ef23bf3c89a7e06df" Nov 22 09:42:57 crc kubenswrapper[4856]: I1122 09:42:57.709379 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:42:57 crc kubenswrapper[4856]: E1122 09:42:57.710200 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:43:11 crc kubenswrapper[4856]: I1122 09:43:11.710405 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:43:11 crc kubenswrapper[4856]: E1122 09:43:11.712739 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:43:23 crc kubenswrapper[4856]: I1122 09:43:23.709713 4856 scope.go:117] "RemoveContainer" 
containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:43:23 crc kubenswrapper[4856]: E1122 09:43:23.710571 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:43:34 crc kubenswrapper[4856]: I1122 09:43:34.710915 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:43:34 crc kubenswrapper[4856]: E1122 09:43:34.712036 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:43:47 crc kubenswrapper[4856]: I1122 09:43:47.710260 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:43:47 crc kubenswrapper[4856]: E1122 09:43:47.711384 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:43:54 crc kubenswrapper[4856]: I1122 09:43:54.937930 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jpsrw"] Nov 22 09:43:54 crc kubenswrapper[4856]: E1122 09:43:54.942268 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 22 09:43:54 crc kubenswrapper[4856]: I1122 09:43:54.942293 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 22 09:43:54 crc kubenswrapper[4856]: I1122 09:43:54.942682 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bdbb850-a5cf-4f8e-ae2e-88655ceda16c" containerName="nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1" Nov 22 09:43:54 crc kubenswrapper[4856]: I1122 09:43:54.945601 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:54 crc kubenswrapper[4856]: I1122 09:43:54.962186 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpsrw"] Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.123074 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sngc\" (UniqueName: \"kubernetes.io/projected/0b19cb61-fb14-4480-afaa-68fbe057007c-kube-api-access-7sngc\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.123607 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-catalog-content\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.123637 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-utilities\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.225487 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sngc\" (UniqueName: \"kubernetes.io/projected/0b19cb61-fb14-4480-afaa-68fbe057007c-kube-api-access-7sngc\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.225675 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-catalog-content\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.225706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-utilities\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.226322 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-utilities\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.226559 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-catalog-content\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.249046 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7sngc\" (UniqueName: \"kubernetes.io/projected/0b19cb61-fb14-4480-afaa-68fbe057007c-kube-api-access-7sngc\") pod \"redhat-marketplace-jpsrw\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.296177 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:43:55 crc kubenswrapper[4856]: I1122 09:43:55.809981 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpsrw"] Nov 22 09:43:56 crc kubenswrapper[4856]: I1122 09:43:56.689788 4856 generic.go:334] "Generic (PLEG): container finished" podID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerID="8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4" exitCode=0 Nov 22 09:43:56 crc kubenswrapper[4856]: I1122 09:43:56.689848 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpsrw" event={"ID":"0b19cb61-fb14-4480-afaa-68fbe057007c","Type":"ContainerDied","Data":"8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4"} Nov 22 09:43:56 crc kubenswrapper[4856]: I1122 09:43:56.690156 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpsrw" event={"ID":"0b19cb61-fb14-4480-afaa-68fbe057007c","Type":"ContainerStarted","Data":"86a26a81d0d3189dc9e56090594c022dd49de840a3fbd7c93ca4da8247ba1f7a"} Nov 22 09:43:56 crc kubenswrapper[4856]: I1122 09:43:56.692773 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:43:57 crc kubenswrapper[4856]: I1122 09:43:57.702100 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpsrw" event={"ID":"0b19cb61-fb14-4480-afaa-68fbe057007c","Type":"ContainerStarted","Data":"b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970"} Nov 22 09:43:58 crc kubenswrapper[4856]: I1122 09:43:58.717484 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:43:58 crc kubenswrapper[4856]: E1122 09:43:58.718062 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:43:58 crc kubenswrapper[4856]: I1122 09:43:58.719491 4856 generic.go:334] "Generic (PLEG): container finished" podID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerID="b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970" exitCode=0 Nov 22 09:43:58 crc kubenswrapper[4856]: I1122 09:43:58.727830 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpsrw" event={"ID":"0b19cb61-fb14-4480-afaa-68fbe057007c","Type":"ContainerDied","Data":"b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970"} Nov 22 09:43:59 crc kubenswrapper[4856]: I1122 09:43:59.730609 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpsrw" 
event={"ID":"0b19cb61-fb14-4480-afaa-68fbe057007c","Type":"ContainerStarted","Data":"71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069"} Nov 22 09:43:59 crc kubenswrapper[4856]: I1122 09:43:59.748294 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jpsrw" podStartSLOduration=3.3141511550000002 podStartE2EDuration="5.748276091s" podCreationTimestamp="2025-11-22 09:43:54 +0000 UTC" firstStartedPulling="2025-11-22 09:43:56.692461073 +0000 UTC m=+9679.105854331" lastFinishedPulling="2025-11-22 09:43:59.126585979 +0000 UTC m=+9681.539979267" observedRunningTime="2025-11-22 09:43:59.746840952 +0000 UTC m=+9682.160234210" watchObservedRunningTime="2025-11-22 09:43:59.748276091 +0000 UTC m=+9682.161669349" Nov 22 09:44:05 crc kubenswrapper[4856]: I1122 09:44:05.297342 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:44:05 crc kubenswrapper[4856]: I1122 09:44:05.298170 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:44:05 crc kubenswrapper[4856]: I1122 09:44:05.367242 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:44:05 crc kubenswrapper[4856]: I1122 09:44:05.932566 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:44:06 crc kubenswrapper[4856]: I1122 09:44:06.020896 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpsrw"] Nov 22 09:44:07 crc kubenswrapper[4856]: I1122 09:44:07.862455 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jpsrw" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="registry-server" containerID="cri-o://71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069" gracePeriod=2 Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.111764 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.112465 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" containerName="adoption" containerID="cri-o://2a730927cabd91eacda30b64ce903cd66ad57b902645157e1718e4c206b6427f" gracePeriod=30 Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.342395 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.442219 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-utilities\") pod \"0b19cb61-fb14-4480-afaa-68fbe057007c\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.442294 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sngc\" (UniqueName: \"kubernetes.io/projected/0b19cb61-fb14-4480-afaa-68fbe057007c-kube-api-access-7sngc\") pod \"0b19cb61-fb14-4480-afaa-68fbe057007c\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.442341 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-catalog-content\") pod \"0b19cb61-fb14-4480-afaa-68fbe057007c\" (UID: \"0b19cb61-fb14-4480-afaa-68fbe057007c\") " Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.443098 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-utilities" (OuterVolumeSpecName: "utilities") pod "0b19cb61-fb14-4480-afaa-68fbe057007c" (UID: "0b19cb61-fb14-4480-afaa-68fbe057007c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.448607 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b19cb61-fb14-4480-afaa-68fbe057007c-kube-api-access-7sngc" (OuterVolumeSpecName: "kube-api-access-7sngc") pod "0b19cb61-fb14-4480-afaa-68fbe057007c" (UID: "0b19cb61-fb14-4480-afaa-68fbe057007c"). InnerVolumeSpecName "kube-api-access-7sngc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.459983 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b19cb61-fb14-4480-afaa-68fbe057007c" (UID: "0b19cb61-fb14-4480-afaa-68fbe057007c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.544287 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.544321 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sngc\" (UniqueName: \"kubernetes.io/projected/0b19cb61-fb14-4480-afaa-68fbe057007c-kube-api-access-7sngc\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.544333 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b19cb61-fb14-4480-afaa-68fbe057007c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.905543 4856 generic.go:334] "Generic (PLEG): container finished" podID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerID="71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069" exitCode=0 Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.905629 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jpsrw" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.905632 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpsrw" event={"ID":"0b19cb61-fb14-4480-afaa-68fbe057007c","Type":"ContainerDied","Data":"71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069"} Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.906483 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jpsrw" event={"ID":"0b19cb61-fb14-4480-afaa-68fbe057007c","Type":"ContainerDied","Data":"86a26a81d0d3189dc9e56090594c022dd49de840a3fbd7c93ca4da8247ba1f7a"} Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.906547 4856 scope.go:117] "RemoveContainer" containerID="71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.943652 4856 scope.go:117] "RemoveContainer" containerID="b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970" Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.946976 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpsrw"] Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.954716 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jpsrw"] Nov 22 09:44:08 crc kubenswrapper[4856]: I1122 09:44:08.980361 4856 scope.go:117] "RemoveContainer" containerID="8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4" Nov 22 09:44:09 crc kubenswrapper[4856]: I1122 09:44:09.014023 4856 scope.go:117] "RemoveContainer" containerID="71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069" Nov 22 09:44:09 crc kubenswrapper[4856]: E1122 09:44:09.014487 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069\": container with ID starting with 71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069 not found: ID does not exist" containerID="71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069" Nov 22 09:44:09 crc kubenswrapper[4856]: I1122 09:44:09.014553 4856 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069"} err="failed to get container status \"71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069\": rpc error: code = NotFound desc = could not find container \"71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069\": container with ID starting with 71939ffbc5a4d00540cd651102c1ed148cb73e603f07a6fac43fab89a3e46069 not found: ID does not exist" Nov 22 09:44:09 crc kubenswrapper[4856]: I1122 09:44:09.014588 4856 scope.go:117] "RemoveContainer" containerID="b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970" Nov 22 09:44:09 crc kubenswrapper[4856]: E1122 09:44:09.014887 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970\": container with ID starting with b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970 not found: ID does not exist" containerID="b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970" Nov 22 09:44:09 crc kubenswrapper[4856]: I1122 09:44:09.014922 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970"} err="failed to get container status \"b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970\": rpc error: code = NotFound desc = could not find container \"b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970\": container with ID starting with b2721ec70859a47176ba544f0e45fd324b60988328d93ae55250891aae9e6970 not found: ID does not exist" Nov 22 09:44:09 crc kubenswrapper[4856]: I1122 09:44:09.014943 4856 scope.go:117] "RemoveContainer" containerID="8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4" Nov 22 09:44:09 crc kubenswrapper[4856]: E1122 09:44:09.015259 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4\": container with ID starting with 8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4 not found: ID does not exist" containerID="8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4" Nov 22 09:44:09 crc kubenswrapper[4856]: I1122 09:44:09.015293 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4"} err="failed to get container status \"8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4\": rpc error: code = NotFound desc = could not find container \"8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4\": container with ID starting with 8a7436c7b1abbb71110e6161235b5a5a0dfb02aab0a19069d79f209986127dd4 not found: ID does not exist" Nov 22 09:44:09 crc kubenswrapper[4856]: I1122 09:44:09.710807 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:44:09 crc kubenswrapper[4856]: E1122 09:44:09.711047 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:44:10 crc kubenswrapper[4856]: I1122 09:44:10.722562 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" path="/var/lib/kubelet/pods/0b19cb61-fb14-4480-afaa-68fbe057007c/volumes" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.018229 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jh9mq"] Nov 22 09:44:11 crc kubenswrapper[4856]: E1122 09:44:11.018985 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="extract-content" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.019009 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="extract-content" Nov 22 09:44:11 crc kubenswrapper[4856]: E1122 09:44:11.019047 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="extract-utilities" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.019060 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="extract-utilities" Nov 22 09:44:11 crc kubenswrapper[4856]: E1122 09:44:11.019127 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="registry-server" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.019143 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="registry-server" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.020836 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b19cb61-fb14-4480-afaa-68fbe057007c" containerName="registry-server" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.022385 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.043723 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jh9mq"] Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.199745 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-utilities\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.199906 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-catalog-content\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.199954 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dflzd\" (UniqueName: \"kubernetes.io/projected/84de7cab-7e5c-4808-ba45-736b0d95cf44-kube-api-access-dflzd\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.301437 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-catalog-content\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.301532 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dflzd\" (UniqueName: \"kubernetes.io/projected/84de7cab-7e5c-4808-ba45-736b0d95cf44-kube-api-access-dflzd\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.301632 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-utilities\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.302140 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-catalog-content\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.302165 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-utilities\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.320606 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dflzd\" (UniqueName: \"kubernetes.io/projected/84de7cab-7e5c-4808-ba45-736b0d95cf44-kube-api-access-dflzd\") pod \"community-operators-jh9mq\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.344292 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.880123 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jh9mq"] Nov 22 09:44:11 crc kubenswrapper[4856]: W1122 09:44:11.896698 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84de7cab_7e5c_4808_ba45_736b0d95cf44.slice/crio-4e52584e202f45a2abad89d173d6431bb173a074baf0a798c449bca89cbd1890 WatchSource:0}: Error finding container 4e52584e202f45a2abad89d173d6431bb173a074baf0a798c449bca89cbd1890: Status 404 returned error can't find the container with id 4e52584e202f45a2abad89d173d6431bb173a074baf0a798c449bca89cbd1890 Nov 22 09:44:11 crc kubenswrapper[4856]: I1122 09:44:11.941420 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh9mq" event={"ID":"84de7cab-7e5c-4808-ba45-736b0d95cf44","Type":"ContainerStarted","Data":"4e52584e202f45a2abad89d173d6431bb173a074baf0a798c449bca89cbd1890"} Nov 22 09:44:12 crc kubenswrapper[4856]: I1122 09:44:12.953040 4856 generic.go:334] "Generic (PLEG): container finished" podID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerID="b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b" exitCode=0 Nov 22 09:44:12 crc kubenswrapper[4856]: I1122 09:44:12.953110 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh9mq" event={"ID":"84de7cab-7e5c-4808-ba45-736b0d95cf44","Type":"ContainerDied","Data":"b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b"} Nov 22 09:44:14 crc kubenswrapper[4856]: I1122 09:44:14.974619 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh9mq" event={"ID":"84de7cab-7e5c-4808-ba45-736b0d95cf44","Type":"ContainerStarted","Data":"7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0"} Nov 22 09:44:15 crc kubenswrapper[4856]: I1122 09:44:15.986325 4856 generic.go:334] "Generic (PLEG): container finished" podID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerID="7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0" exitCode=0 Nov 22 09:44:15 crc kubenswrapper[4856]: I1122 09:44:15.986397 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh9mq" event={"ID":"84de7cab-7e5c-4808-ba45-736b0d95cf44","Type":"ContainerDied","Data":"7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0"} Nov 22 09:44:17 crc kubenswrapper[4856]: I1122 09:44:17.018723 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh9mq" event={"ID":"84de7cab-7e5c-4808-ba45-736b0d95cf44","Type":"ContainerStarted","Data":"cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8"} Nov 22 09:44:17 crc kubenswrapper[4856]: I1122 09:44:17.042149 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jh9mq" 
podStartSLOduration=3.596687554 podStartE2EDuration="7.042132174s" podCreationTimestamp="2025-11-22 09:44:10 +0000 UTC" firstStartedPulling="2025-11-22 09:44:12.955324631 +0000 UTC m=+9695.368717889" lastFinishedPulling="2025-11-22 09:44:16.400769251 +0000 UTC m=+9698.814162509" observedRunningTime="2025-11-22 09:44:17.036002189 +0000 UTC m=+9699.449395447" watchObservedRunningTime="2025-11-22 09:44:17.042132174 +0000 UTC m=+9699.455525432" Nov 22 09:44:21 crc kubenswrapper[4856]: I1122 09:44:21.344806 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:21 crc kubenswrapper[4856]: I1122 09:44:21.345429 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:21 crc kubenswrapper[4856]: I1122 09:44:21.417822 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:22 crc kubenswrapper[4856]: I1122 09:44:22.129319 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:22 crc kubenswrapper[4856]: I1122 09:44:22.212003 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jh9mq"] Nov 22 09:44:22 crc kubenswrapper[4856]: I1122 09:44:22.713753 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:44:22 crc kubenswrapper[4856]: E1122 09:44:22.714137 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.096215 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jh9mq" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="registry-server" containerID="cri-o://cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8" gracePeriod=2 Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.563933 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.700470 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-utilities\") pod \"84de7cab-7e5c-4808-ba45-736b0d95cf44\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.700832 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dflzd\" (UniqueName: \"kubernetes.io/projected/84de7cab-7e5c-4808-ba45-736b0d95cf44-kube-api-access-dflzd\") pod \"84de7cab-7e5c-4808-ba45-736b0d95cf44\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.700911 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-catalog-content\") pod \"84de7cab-7e5c-4808-ba45-736b0d95cf44\" (UID: \"84de7cab-7e5c-4808-ba45-736b0d95cf44\") " Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.701318 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-utilities" (OuterVolumeSpecName: "utilities") pod "84de7cab-7e5c-4808-ba45-736b0d95cf44" (UID: "84de7cab-7e5c-4808-ba45-736b0d95cf44"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.701451 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.706780 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84de7cab-7e5c-4808-ba45-736b0d95cf44-kube-api-access-dflzd" (OuterVolumeSpecName: "kube-api-access-dflzd") pod "84de7cab-7e5c-4808-ba45-736b0d95cf44" (UID: "84de7cab-7e5c-4808-ba45-736b0d95cf44"). InnerVolumeSpecName "kube-api-access-dflzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:44:24 crc kubenswrapper[4856]: I1122 09:44:24.803667 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dflzd\" (UniqueName: \"kubernetes.io/projected/84de7cab-7e5c-4808-ba45-736b0d95cf44-kube-api-access-dflzd\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.107798 4856 generic.go:334] "Generic (PLEG): container finished" podID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerID="cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8" exitCode=0 Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.107872 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jh9mq" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.107895 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh9mq" event={"ID":"84de7cab-7e5c-4808-ba45-736b0d95cf44","Type":"ContainerDied","Data":"cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8"} Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.109039 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jh9mq" event={"ID":"84de7cab-7e5c-4808-ba45-736b0d95cf44","Type":"ContainerDied","Data":"4e52584e202f45a2abad89d173d6431bb173a074baf0a798c449bca89cbd1890"} Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.109067 4856 scope.go:117] "RemoveContainer" containerID="cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.144369 4856 scope.go:117] "RemoveContainer" containerID="7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.183898 4856 scope.go:117] "RemoveContainer" containerID="b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.251260 4856 scope.go:117] "RemoveContainer" containerID="cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8" Nov 22 09:44:25 crc kubenswrapper[4856]: E1122 09:44:25.251795 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8\": container with ID starting with cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8 not found: ID does not exist" containerID="cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.251827 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8"} err="failed to get container status \"cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8\": rpc error: code = NotFound desc = could not find container \"cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8\": container with ID starting with cae96684e121b8bc70ef6fd4864a62b80bb2d9da52b3a3d05d806ab42f2a80f8 not found: ID does not exist" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.251850 4856 scope.go:117] "RemoveContainer" containerID="7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0" Nov 22 09:44:25 crc kubenswrapper[4856]: E1122 09:44:25.252248 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0\": container with ID starting with 7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0 not found: ID does not exist" containerID="7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.252281 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0"} err="failed to get container status \"7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0\": rpc error: code = NotFound desc = could not find container 
\"7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0\": container with ID starting with 7cb43c4210b29024701949a6854144775b4d9892da2d0433074c9b5385a3add0 not found: ID does not exist" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.252303 4856 scope.go:117] "RemoveContainer" containerID="b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b" Nov 22 09:44:25 crc kubenswrapper[4856]: E1122 09:44:25.252878 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b\": container with ID starting with b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b not found: ID does not exist" containerID="b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.252941 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b"} err="failed to get container status \"b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b\": rpc error: code = NotFound desc = could not find container \"b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b\": container with ID starting with b8a14cc6f4728b2160fc2119c9e88a528a32a1fd6d68682962e14f4b9a39185b not found: ID does not exist" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.426724 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84de7cab-7e5c-4808-ba45-736b0d95cf44" (UID: "84de7cab-7e5c-4808-ba45-736b0d95cf44"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.521919 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84de7cab-7e5c-4808-ba45-736b0d95cf44-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.745176 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jh9mq"] Nov 22 09:44:25 crc kubenswrapper[4856]: I1122 09:44:25.754536 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jh9mq"] Nov 22 09:44:26 crc kubenswrapper[4856]: I1122 09:44:26.732287 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" path="/var/lib/kubelet/pods/84de7cab-7e5c-4808-ba45-736b0d95cf44/volumes" Nov 22 09:44:35 crc kubenswrapper[4856]: I1122 09:44:35.710679 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:44:35 crc kubenswrapper[4856]: E1122 09:44:35.711709 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.278669 4856 generic.go:334] "Generic (PLEG): container finished" podID="7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" containerID="2a730927cabd91eacda30b64ce903cd66ad57b902645157e1718e4c206b6427f" exitCode=137 Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.278826 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb","Type":"ContainerDied","Data":"2a730927cabd91eacda30b64ce903cd66ad57b902645157e1718e4c206b6427f"} Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.617580 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.734004 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qvs\" (UniqueName: \"kubernetes.io/projected/7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb-kube-api-access-q4qvs\") pod \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.735056 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\") pod \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\" (UID: \"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb\") " Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.742729 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb-kube-api-access-q4qvs" (OuterVolumeSpecName: "kube-api-access-q4qvs") pod "7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" (UID: "7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb"). InnerVolumeSpecName "kube-api-access-q4qvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.756011 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54" (OuterVolumeSpecName: "mariadb-data") pod "7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" (UID: "7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb"). InnerVolumeSpecName "pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.838677 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qvs\" (UniqueName: \"kubernetes.io/projected/7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb-kube-api-access-q4qvs\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.838759 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\") on node \"crc\" " Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.891839 4856 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.892028 4856 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54") on node "crc" Nov 22 09:44:38 crc kubenswrapper[4856]: I1122 09:44:38.940627 4856 reconciler_common.go:293] "Volume detached for volume \"pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-82784a2a-b63c-4edd-a9bc-a5990fd41c54\") on node \"crc\" DevicePath \"\"" Nov 22 09:44:39 crc kubenswrapper[4856]: I1122 09:44:39.290026 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb","Type":"ContainerDied","Data":"0152727fc3fa9b518a4fd8341da94d7f298aadb5f6d950b8eaafbd0bc7e344b5"} Nov 22 09:44:39 crc kubenswrapper[4856]: I1122 09:44:39.290074 4856 scope.go:117] "RemoveContainer" containerID="2a730927cabd91eacda30b64ce903cd66ad57b902645157e1718e4c206b6427f" Nov 22 09:44:39 crc kubenswrapper[4856]: I1122 09:44:39.290190 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Nov 22 09:44:39 crc kubenswrapper[4856]: I1122 09:44:39.323130 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:44:39 crc kubenswrapper[4856]: I1122 09:44:39.331106 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-copy-data"] Nov 22 09:44:40 crc kubenswrapper[4856]: I1122 09:44:40.005658 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:44:40 crc kubenswrapper[4856]: I1122 09:44:40.006286 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="19475584-27e0-4a31-b29f-d93bd563b5ef" containerName="adoption" containerID="cri-o://53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3" gracePeriod=30 Nov 22 09:44:40 crc kubenswrapper[4856]: I1122 09:44:40.726594 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" path="/var/lib/kubelet/pods/7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb/volumes" Nov 22 09:44:50 crc kubenswrapper[4856]: I1122 09:44:50.709935 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:44:50 crc kubenswrapper[4856]: E1122 09:44:50.711197 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.191033 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth"] Nov 22 09:45:00 crc kubenswrapper[4856]: E1122 09:45:00.192987 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="extract-content" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.193013 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="extract-content" Nov 22 09:45:00 crc kubenswrapper[4856]: E1122 09:45:00.193084 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="registry-server" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.193099 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="registry-server" Nov 22 09:45:00 crc kubenswrapper[4856]: E1122 09:45:00.193162 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" containerName="adoption" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.193171 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" containerName="adoption" Nov 22 09:45:00 crc kubenswrapper[4856]: E1122 09:45:00.193205 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="extract-utilities" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.193215 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="extract-utilities" Nov 22 
09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.194048 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b7d97c2-2b1c-449c-a50f-ed3d5d1563eb" containerName="adoption" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.194128 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="84de7cab-7e5c-4808-ba45-736b0d95cf44" containerName="registry-server" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.195496 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.199449 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.199734 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.233266 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth"] Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.339071 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c67260e2-6e46-486a-a3a3-f1f4c64d934c-config-volume\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.339258 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5jlt\" (UniqueName: \"kubernetes.io/projected/c67260e2-6e46-486a-a3a3-f1f4c64d934c-kube-api-access-f5jlt\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.339459 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c67260e2-6e46-486a-a3a3-f1f4c64d934c-secret-volume\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.442644 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c67260e2-6e46-486a-a3a3-f1f4c64d934c-config-volume\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.442758 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5jlt\" (UniqueName: \"kubernetes.io/projected/c67260e2-6e46-486a-a3a3-f1f4c64d934c-kube-api-access-f5jlt\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.442873 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/c67260e2-6e46-486a-a3a3-f1f4c64d934c-secret-volume\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.443627 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c67260e2-6e46-486a-a3a3-f1f4c64d934c-config-volume\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.458614 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c67260e2-6e46-486a-a3a3-f1f4c64d934c-secret-volume\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.462360 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5jlt\" (UniqueName: \"kubernetes.io/projected/c67260e2-6e46-486a-a3a3-f1f4c64d934c-kube-api-access-f5jlt\") pod \"collect-profiles-29396745-8zxth\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:00 crc kubenswrapper[4856]: I1122 09:45:00.525846 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:01 crc kubenswrapper[4856]: I1122 09:45:01.020222 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth"] Nov 22 09:45:01 crc kubenswrapper[4856]: I1122 09:45:01.586576 4856 generic.go:334] "Generic (PLEG): container finished" podID="c67260e2-6e46-486a-a3a3-f1f4c64d934c" containerID="7ba5461e71605eacb29260f300d16324262790a2d5971420e977563cf9d7e811" exitCode=0 Nov 22 09:45:01 crc kubenswrapper[4856]: I1122 09:45:01.586657 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" event={"ID":"c67260e2-6e46-486a-a3a3-f1f4c64d934c","Type":"ContainerDied","Data":"7ba5461e71605eacb29260f300d16324262790a2d5971420e977563cf9d7e811"} Nov 22 09:45:01 crc kubenswrapper[4856]: I1122 09:45:01.586912 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" event={"ID":"c67260e2-6e46-486a-a3a3-f1f4c64d934c","Type":"ContainerStarted","Data":"cb1f7ed5b7a66f76d76e87974ebacd614c329324940e3127df08702ad7c3624c"} Nov 22 09:45:02 crc kubenswrapper[4856]: I1122 09:45:02.996417 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.102631 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5jlt\" (UniqueName: \"kubernetes.io/projected/c67260e2-6e46-486a-a3a3-f1f4c64d934c-kube-api-access-f5jlt\") pod \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.102790 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c67260e2-6e46-486a-a3a3-f1f4c64d934c-config-volume\") pod \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.102942 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c67260e2-6e46-486a-a3a3-f1f4c64d934c-secret-volume\") pod \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\" (UID: \"c67260e2-6e46-486a-a3a3-f1f4c64d934c\") " Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.103597 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c67260e2-6e46-486a-a3a3-f1f4c64d934c-config-volume" (OuterVolumeSpecName: "config-volume") pod "c67260e2-6e46-486a-a3a3-f1f4c64d934c" (UID: "c67260e2-6e46-486a-a3a3-f1f4c64d934c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.109430 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67260e2-6e46-486a-a3a3-f1f4c64d934c-kube-api-access-f5jlt" (OuterVolumeSpecName: "kube-api-access-f5jlt") pod "c67260e2-6e46-486a-a3a3-f1f4c64d934c" (UID: "c67260e2-6e46-486a-a3a3-f1f4c64d934c"). InnerVolumeSpecName "kube-api-access-f5jlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.110360 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c67260e2-6e46-486a-a3a3-f1f4c64d934c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c67260e2-6e46-486a-a3a3-f1f4c64d934c" (UID: "c67260e2-6e46-486a-a3a3-f1f4c64d934c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.205882 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c67260e2-6e46-486a-a3a3-f1f4c64d934c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.205928 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5jlt\" (UniqueName: \"kubernetes.io/projected/c67260e2-6e46-486a-a3a3-f1f4c64d934c-kube-api-access-f5jlt\") on node \"crc\" DevicePath \"\"" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.205937 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c67260e2-6e46-486a-a3a3-f1f4c64d934c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.612600 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" event={"ID":"c67260e2-6e46-486a-a3a3-f1f4c64d934c","Type":"ContainerDied","Data":"cb1f7ed5b7a66f76d76e87974ebacd614c329324940e3127df08702ad7c3624c"} Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.612644 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1f7ed5b7a66f76d76e87974ebacd614c329324940e3127df08702ad7c3624c" Nov 22 09:45:03 crc kubenswrapper[4856]: I1122 09:45:03.614046 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396745-8zxth" Nov 22 09:45:04 crc kubenswrapper[4856]: I1122 09:45:04.074863 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6"] Nov 22 09:45:04 crc kubenswrapper[4856]: I1122 09:45:04.086118 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396700-zxxf6"] Nov 22 09:45:04 crc kubenswrapper[4856]: I1122 09:45:04.721433 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1b0306c-9e7a-4831-8e59-e0e743c35064" path="/var/lib/kubelet/pods/e1b0306c-9e7a-4831-8e59-e0e743c35064/volumes" Nov 22 09:45:05 crc kubenswrapper[4856]: I1122 09:45:05.709531 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:45:05 crc kubenswrapper[4856]: E1122 09:45:05.710035 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.636260 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.695156 4856 generic.go:334] "Generic (PLEG): container finished" podID="19475584-27e0-4a31-b29f-d93bd563b5ef" containerID="53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3" exitCode=137 Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.695365 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"19475584-27e0-4a31-b29f-d93bd563b5ef","Type":"ContainerDied","Data":"53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3"} Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.695457 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"19475584-27e0-4a31-b29f-d93bd563b5ef","Type":"ContainerDied","Data":"0763286be3ab5d2a63be3ca4bd7ca9bcf254cc05cd1c67301d081ec45d05d860"} Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.695479 4856 scope.go:117] "RemoveContainer" containerID="53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.695482 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.716803 4856 scope.go:117] "RemoveContainer" containerID="53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3" Nov 22 09:45:11 crc kubenswrapper[4856]: E1122 09:45:11.717209 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3\": container with ID starting with 53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3 not found: ID does not exist" containerID="53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.717251 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3"} err="failed to get container status \"53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3\": rpc error: code = NotFound desc = could not find container \"53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3\": container with ID starting with 53ec1333c997dba7960dc08afcc0069cf964876c1bbc176773f571f7a78848d3 not found: ID does not exist" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.723883 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617\") pod \"19475584-27e0-4a31-b29f-d93bd563b5ef\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.723955 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqls8\" (UniqueName: \"kubernetes.io/projected/19475584-27e0-4a31-b29f-d93bd563b5ef-kube-api-access-fqls8\") pod \"19475584-27e0-4a31-b29f-d93bd563b5ef\" (UID: \"19475584-27e0-4a31-b29f-d93bd563b5ef\") " Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.724149 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/19475584-27e0-4a31-b29f-d93bd563b5ef-ovn-data-cert\") pod \"19475584-27e0-4a31-b29f-d93bd563b5ef\" (UID: 
\"19475584-27e0-4a31-b29f-d93bd563b5ef\") " Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.729600 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19475584-27e0-4a31-b29f-d93bd563b5ef-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "19475584-27e0-4a31-b29f-d93bd563b5ef" (UID: "19475584-27e0-4a31-b29f-d93bd563b5ef"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.729843 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19475584-27e0-4a31-b29f-d93bd563b5ef-kube-api-access-fqls8" (OuterVolumeSpecName: "kube-api-access-fqls8") pod "19475584-27e0-4a31-b29f-d93bd563b5ef" (UID: "19475584-27e0-4a31-b29f-d93bd563b5ef"). InnerVolumeSpecName "kube-api-access-fqls8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.742569 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617" (OuterVolumeSpecName: "ovn-data") pod "19475584-27e0-4a31-b29f-d93bd563b5ef" (UID: "19475584-27e0-4a31-b29f-d93bd563b5ef"). InnerVolumeSpecName "pvc-5747b224-26ef-4a31-82e4-f602c81b2617". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.826265 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5747b224-26ef-4a31-82e4-f602c81b2617\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617\") on node \"crc\" " Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.826308 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqls8\" (UniqueName: \"kubernetes.io/projected/19475584-27e0-4a31-b29f-d93bd563b5ef-kube-api-access-fqls8\") on node \"crc\" DevicePath \"\"" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.826323 4856 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/19475584-27e0-4a31-b29f-d93bd563b5ef-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.856084 4856 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.856294 4856 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5747b224-26ef-4a31-82e4-f602c81b2617" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617") on node "crc" Nov 22 09:45:11 crc kubenswrapper[4856]: I1122 09:45:11.928222 4856 reconciler_common.go:293] "Volume detached for volume \"pvc-5747b224-26ef-4a31-82e4-f602c81b2617\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5747b224-26ef-4a31-82e4-f602c81b2617\") on node \"crc\" DevicePath \"\"" Nov 22 09:45:12 crc kubenswrapper[4856]: I1122 09:45:12.028901 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:45:12 crc kubenswrapper[4856]: I1122 09:45:12.037443 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Nov 22 09:45:12 crc kubenswrapper[4856]: I1122 09:45:12.722670 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19475584-27e0-4a31-b29f-d93bd563b5ef" path="/var/lib/kubelet/pods/19475584-27e0-4a31-b29f-d93bd563b5ef/volumes" Nov 22 09:45:16 crc kubenswrapper[4856]: I1122 09:45:16.710384 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:45:16 crc kubenswrapper[4856]: E1122 09:45:16.711271 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:45:29 crc kubenswrapper[4856]: I1122 09:45:29.709898 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:45:29 crc kubenswrapper[4856]: E1122 09:45:29.710975 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.868297 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 22 09:45:32 crc kubenswrapper[4856]: E1122 09:45:32.869777 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19475584-27e0-4a31-b29f-d93bd563b5ef" containerName="adoption" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.869817 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="19475584-27e0-4a31-b29f-d93bd563b5ef" containerName="adoption" Nov 22 09:45:32 crc kubenswrapper[4856]: E1122 09:45:32.869840 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67260e2-6e46-486a-a3a3-f1f4c64d934c" containerName="collect-profiles" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.869848 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67260e2-6e46-486a-a3a3-f1f4c64d934c" containerName="collect-profiles" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.870479 4856 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="19475584-27e0-4a31-b29f-d93bd563b5ef" containerName="adoption" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.870527 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c67260e2-6e46-486a-a3a3-f1f4c64d934c" containerName="collect-profiles" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.871670 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.877960 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.879132 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.879802 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.880079 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvzxv" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.884299 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.995571 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.995934 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.995956 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.996007 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-config-data\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.996026 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwz9\" (UniqueName: \"kubernetes.io/projected/9ceb57cb-8794-40bb-97b2-d59671b89459-kube-api-access-bfwz9\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.996053 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.996081 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.996110 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:32 crc kubenswrapper[4856]: I1122 09:45:32.996154 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.098645 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.098745 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.098777 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.098882 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-config-data\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.098927 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfwz9\" (UniqueName: \"kubernetes.io/projected/9ceb57cb-8794-40bb-97b2-d59671b89459-kube-api-access-bfwz9\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.099019 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ca-certs\") pod 
\"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.099074 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.099127 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.099237 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.099281 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.099330 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.099387 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.100671 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.102719 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-config-data\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.107014 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc 
kubenswrapper[4856]: I1122 09:45:33.107023 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.116834 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.124009 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfwz9\" (UniqueName: \"kubernetes.io/projected/9ceb57cb-8794-40bb-97b2-d59671b89459-kube-api-access-bfwz9\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.157220 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.200538 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.659763 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 22 09:45:33 crc kubenswrapper[4856]: I1122 09:45:33.924750 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ceb57cb-8794-40bb-97b2-d59671b89459","Type":"ContainerStarted","Data":"db5a297fdee10da587b0c07acdb07d0b4029dc9b1ee84f269418ca4bd8761c33"} Nov 22 09:45:43 crc kubenswrapper[4856]: I1122 09:45:43.710025 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:45:43 crc kubenswrapper[4856]: E1122 09:45:43.710969 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:45:50 crc kubenswrapper[4856]: I1122 09:45:50.266957 4856 scope.go:117] "RemoveContainer" containerID="85326c197aab7ac8d5e6b131d871ec6b2782ce2404f52819e68e114711ec7f2b" Nov 22 09:45:55 crc kubenswrapper[4856]: I1122 09:45:55.709624 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:45:55 crc kubenswrapper[4856]: E1122 09:45:55.710830 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:46:07 crc kubenswrapper[4856]: I1122 09:46:07.710684 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:46:07 crc kubenswrapper[4856]: E1122 09:46:07.712220 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:46:11 crc kubenswrapper[4856]: I1122 09:46:11.769632 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" probeResult="failure" output="command timed out" Nov 22 09:46:11 crc kubenswrapper[4856]: I1122 09:46:11.770008 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" probeResult="failure" output="command timed out" Nov 22 09:46:12 crc kubenswrapper[4856]: I1122 09:46:12.132957 4856 patch_prober.go:28] interesting pod/route-controller-manager-699987fd4b-ggfvm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 22 09:46:12 crc kubenswrapper[4856]: I1122 09:46:12.133034 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-699987fd4b-ggfvm" podUID="76ca1343-914b-414b-b1e3-5e4b2a165697" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 22 09:46:15 crc kubenswrapper[4856]: I1122 09:46:15.773114 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-g4jn9" podUID="1966788b-abc1-4c4a-a29c-aaeba9a3ca65" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:15 crc kubenswrapper[4856]: I1122 09:46:15.775936 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-g4jn9" podUID="1966788b-abc1-4c4a-a29c-aaeba9a3ca65" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:16 crc kubenswrapper[4856]: I1122 09:46:16.769424 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" probeResult="failure" output="command timed out" Nov 22 09:46:16 crc kubenswrapper[4856]: I1122 09:46:16.769811 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" probeResult="failure" output="command timed out" Nov 22 09:46:17 crc kubenswrapper[4856]: I1122 09:46:17.774181 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-s8jpj" 
podUID="a8b51997-87ba-499c-903d-82c1b85c0968" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:17 crc kubenswrapper[4856]: I1122 09:46:17.774360 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-s8jpj" podUID="a8b51997-87ba-499c-903d-82c1b85c0968" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:17 crc kubenswrapper[4856]: I1122 09:46:17.774393 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-rqb7t" podUID="3c7b0aba-250c-483e-ba94-3dcc4b9c59bb" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:17 crc kubenswrapper[4856]: I1122 09:46:17.774470 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-rqb7t" podUID="3c7b0aba-250c-483e-ba94-3dcc4b9c59bb" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:18 crc kubenswrapper[4856]: I1122 09:46:18.772409 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="d442d81d-f24e-4a27-bbb5-f25a1792bfca" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Nov 22 09:46:20 crc kubenswrapper[4856]: I1122 09:46:20.770918 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-k5p5q" podUID="53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:20 crc kubenswrapper[4856]: I1122 09:46:20.772802 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-k5p5q" podUID="53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0" containerName="registry-server" probeResult="failure" output="command timed out" Nov 22 09:46:21 crc kubenswrapper[4856]: I1122 09:46:21.710214 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:46:21 crc kubenswrapper[4856]: E1122 09:46:21.711185 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:46:21 crc kubenswrapper[4856]: I1122 09:46:21.768891 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" probeResult="failure" output="command timed out" Nov 22 09:46:21 crc kubenswrapper[4856]: I1122 09:46:21.768991 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ovn-northd-0" Nov 22 09:46:21 crc kubenswrapper[4856]: I1122 09:46:21.770949 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ovn-northd" containerStatusID={"Type":"cri-o","ID":"34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8"} pod="openstack/ovn-northd-0" containerMessage="Container ovn-northd failed liveness probe, will be restarted" Nov 22 09:46:21 crc kubenswrapper[4856]: I1122 09:46:21.771120 4856 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" containerID="cri-o://34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" gracePeriod=30 Nov 22 09:46:21 crc kubenswrapper[4856]: I1122 09:46:21.771573 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" probeResult="failure" output="command timed out" Nov 22 09:46:21 crc kubenswrapper[4856]: I1122 09:46:21.771705 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 09:46:22 crc kubenswrapper[4856]: I1122 09:46:22.316764 4856 trace.go:236] Trace[609701877]: "Calculate volume metrics of mcc-auth-proxy-config for pod openshift-machine-config-operator/machine-config-controller-84d6567774-5cq4r" (22-Nov-2025 09:46:11.829) (total time: 10486ms): Nov 22 09:46:22 crc kubenswrapper[4856]: Trace[609701877]: [10.486996682s] [10.486996682s] END Nov 22 09:46:23 crc kubenswrapper[4856]: I1122 09:46:23.769408 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="274230c4-41e5-433a-8878-a09cd3ea7de8" containerName="galera" probeResult="failure" output="command timed out" Nov 22 09:46:23 crc kubenswrapper[4856]: I1122 09:46:23.769888 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d4dcc1d5-4e57-45ff-931e-0be9bc3be546" containerName="galera" probeResult="failure" output="command timed out" Nov 22 09:46:23 crc kubenswrapper[4856]: I1122 09:46:23.770178 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="d4dcc1d5-4e57-45ff-931e-0be9bc3be546" containerName="galera" probeResult="failure" output="command timed out" Nov 22 09:46:23 crc kubenswrapper[4856]: I1122 09:46:23.772342 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="274230c4-41e5-433a-8878-a09cd3ea7de8" containerName="galera" probeResult="failure" output="command timed out" Nov 22 09:46:25 crc kubenswrapper[4856]: E1122 09:46:25.489812 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:25 crc kubenswrapper[4856]: E1122 09:46:25.492033 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:25 crc kubenswrapper[4856]: E1122 09:46:25.496055 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:25 crc kubenswrapper[4856]: E1122 09:46:25.496102 4856 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" Nov 22 09:46:28 crc kubenswrapper[4856]: I1122 09:46:28.549265 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_466d6ab8-2d26-4845-85a4-d4e652a857e7/ovn-northd/0.log" Nov 22 09:46:28 crc kubenswrapper[4856]: I1122 09:46:28.549779 4856 generic.go:334] "Generic (PLEG): container finished" podID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" exitCode=139 Nov 22 09:46:28 crc kubenswrapper[4856]: I1122 09:46:28.549812 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"466d6ab8-2d26-4845-85a4-d4e652a857e7","Type":"ContainerDied","Data":"34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8"} Nov 22 09:46:30 crc kubenswrapper[4856]: E1122 09:46:30.486168 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:30 crc kubenswrapper[4856]: E1122 09:46:30.486729 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:30 crc kubenswrapper[4856]: E1122 09:46:30.487074 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:30 crc kubenswrapper[4856]: E1122 09:46:30.487111 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" Nov 22 09:46:35 crc kubenswrapper[4856]: E1122 09:46:35.485984 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:35 crc kubenswrapper[4856]: E1122 09:46:35.489024 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" 
containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:35 crc kubenswrapper[4856]: E1122 09:46:35.489357 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:35 crc kubenswrapper[4856]: E1122 09:46:35.489389 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" Nov 22 09:46:36 crc kubenswrapper[4856]: I1122 09:46:36.710359 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:46:36 crc kubenswrapper[4856]: E1122 09:46:36.711062 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:46:40 crc kubenswrapper[4856]: E1122 09:46:40.485631 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:40 crc kubenswrapper[4856]: E1122 09:46:40.487529 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:40 crc kubenswrapper[4856]: E1122 09:46:40.487810 4856 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" containerID="34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Nov 22 09:46:40 crc kubenswrapper[4856]: E1122 09:46:40.487837 4856 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 34a6027379eca4c528f3fddeba3135b428526616923ed44180d60177f88b9ec8 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="466d6ab8-2d26-4845-85a4-d4e652a857e7" containerName="ovn-northd" Nov 22 09:46:41 crc 
kubenswrapper[4856]: E1122 09:46:41.873741 4856 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:87d86758a49b8425a546c66207f21761" Nov 22 09:46:41 crc kubenswrapper[4856]: E1122 09:46:41.874210 4856 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:87d86758a49b8425a546c66207f21761" Nov 22 09:46:41 crc kubenswrapper[4856]: E1122 09:46:41.874461 4856 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:87d86758a49b8425a546c66207f21761,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfwz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Op
tional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(9ceb57cb-8794-40bb-97b2-d59671b89459): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 22 09:46:41 crc kubenswrapper[4856]: E1122 09:46:41.875795 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="9ceb57cb-8794-40bb-97b2-d59671b89459" Nov 22 09:46:42 crc kubenswrapper[4856]: I1122 09:46:42.708917 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_466d6ab8-2d26-4845-85a4-d4e652a857e7/ovn-northd/0.log" Nov 22 09:46:42 crc kubenswrapper[4856]: I1122 09:46:42.709330 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"466d6ab8-2d26-4845-85a4-d4e652a857e7","Type":"ContainerStarted","Data":"11820bc719ccc4165ea4430b198567fdc97f607cd41a9c7d80408227e0bea049"} Nov 22 09:46:42 crc kubenswrapper[4856]: E1122 09:46:42.712599 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:87d86758a49b8425a546c66207f21761\\\"\"" pod="openstack/tempest-tests-tempest" podUID="9ceb57cb-8794-40bb-97b2-d59671b89459" Nov 22 09:46:42 crc kubenswrapper[4856]: I1122 09:46:42.727850 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 22 09:46:50 crc kubenswrapper[4856]: I1122 09:46:50.709580 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:46:50 crc kubenswrapper[4856]: E1122 09:46:50.710453 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:46:53 crc kubenswrapper[4856]: I1122 09:46:53.942461 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 22 09:46:55 crc kubenswrapper[4856]: I1122 09:46:55.573040 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 22 09:46:55 crc kubenswrapper[4856]: I1122 09:46:55.822794 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ceb57cb-8794-40bb-97b2-d59671b89459","Type":"ContainerStarted","Data":"091bcffc4f70006fec8c980ae3b20ba63a7c90420f7373fb904f04352d1923a7"} Nov 22 09:46:55 crc kubenswrapper[4856]: I1122 09:46:55.845137 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.565794354 podStartE2EDuration="1m24.845119565s" podCreationTimestamp="2025-11-22 09:45:31 +0000 UTC" firstStartedPulling="2025-11-22 09:45:33.660748759 +0000 UTC m=+9776.074142017" 
lastFinishedPulling="2025-11-22 09:46:53.94007396 +0000 UTC m=+9856.353467228" observedRunningTime="2025-11-22 09:46:55.838348283 +0000 UTC m=+9858.251741541" watchObservedRunningTime="2025-11-22 09:46:55.845119565 +0000 UTC m=+9858.258512823" Nov 22 09:47:05 crc kubenswrapper[4856]: I1122 09:47:05.709721 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:47:05 crc kubenswrapper[4856]: E1122 09:47:05.710624 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:47:20 crc kubenswrapper[4856]: I1122 09:47:20.710353 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:47:20 crc kubenswrapper[4856]: E1122 09:47:20.711195 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:47:32 crc kubenswrapper[4856]: I1122 09:47:32.710312 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:47:33 crc kubenswrapper[4856]: I1122 09:47:33.205763 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"abbde5eb742856a6b1aeecc35584b7c21bad82eee17d41dfdf37702229eadfce"} Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.411142 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rx496"] Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.414036 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.427263 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rx496"] Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.537369 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-catalog-content\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.537530 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmmxf\" (UniqueName: \"kubernetes.io/projected/cc4c809b-2668-4ef4-8d64-644a0fa26c85-kube-api-access-dmmxf\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.537717 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-utilities\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.639936 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-utilities\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.640017 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-catalog-content\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.640134 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmmxf\" (UniqueName: \"kubernetes.io/projected/cc4c809b-2668-4ef4-8d64-644a0fa26c85-kube-api-access-dmmxf\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.640586 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-catalog-content\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.640623 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-utilities\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:35 crc kubenswrapper[4856]: I1122 09:47:35.969946 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dmmxf\" (UniqueName: \"kubernetes.io/projected/cc4c809b-2668-4ef4-8d64-644a0fa26c85-kube-api-access-dmmxf\") pod \"redhat-operators-rx496\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:36 crc kubenswrapper[4856]: I1122 09:47:36.049089 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:47:37 crc kubenswrapper[4856]: I1122 09:47:37.286879 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rx496"] Nov 22 09:47:38 crc kubenswrapper[4856]: I1122 09:47:38.276326 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerStarted","Data":"f63047c36bbb3ba9826f5437dcc4aec866cd7299a2e8dd9e81d32956803f2a94"} Nov 22 09:47:41 crc kubenswrapper[4856]: I1122 09:47:41.437731 4856 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-8484479b76-8csj5" podUID="14290ea7-6928-401a-8a9e-3ab8e557570d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:47:41 crc kubenswrapper[4856]: I1122 09:47:41.438150 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-8484479b76-8csj5" podUID="14290ea7-6928-401a-8a9e-3ab8e557570d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.110:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 22 09:47:43 crc kubenswrapper[4856]: I1122 09:47:43.773685 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="d442d81d-f24e-4a27-bbb5-f25a1792bfca" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Nov 22 09:47:47 crc kubenswrapper[4856]: I1122 09:47:47.390045 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerStarted","Data":"9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379"} Nov 22 09:47:48 crc kubenswrapper[4856]: I1122 09:47:48.405837 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerID="9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379" exitCode=0 Nov 22 09:47:48 crc kubenswrapper[4856]: I1122 09:47:48.405880 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerDied","Data":"9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379"} Nov 22 09:47:51 crc kubenswrapper[4856]: I1122 09:47:51.442399 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerStarted","Data":"a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8"} Nov 22 09:48:12 crc kubenswrapper[4856]: I1122 09:48:12.644471 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerID="a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8" exitCode=0 Nov 22 09:48:12 crc 
kubenswrapper[4856]: I1122 09:48:12.644549 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerDied","Data":"a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8"} Nov 22 09:48:13 crc kubenswrapper[4856]: I1122 09:48:13.713775 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerStarted","Data":"0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd"} Nov 22 09:48:16 crc kubenswrapper[4856]: I1122 09:48:16.050234 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:48:16 crc kubenswrapper[4856]: I1122 09:48:16.050939 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:48:17 crc kubenswrapper[4856]: I1122 09:48:17.099797 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rx496" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" probeResult="failure" output=< Nov 22 09:48:17 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:48:17 crc kubenswrapper[4856]: > Nov 22 09:48:27 crc kubenswrapper[4856]: I1122 09:48:27.121206 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rx496" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" probeResult="failure" output=< Nov 22 09:48:27 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:48:27 crc kubenswrapper[4856]: > Nov 22 09:48:37 crc kubenswrapper[4856]: I1122 09:48:37.099316 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rx496" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" probeResult="failure" output=< Nov 22 09:48:37 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:48:37 crc kubenswrapper[4856]: > Nov 22 09:48:47 crc kubenswrapper[4856]: I1122 09:48:47.099131 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rx496" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" probeResult="failure" output=< Nov 22 09:48:47 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:48:47 crc kubenswrapper[4856]: > Nov 22 09:48:56 crc kubenswrapper[4856]: I1122 09:48:56.099393 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:48:56 crc kubenswrapper[4856]: I1122 09:48:56.121824 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rx496" podStartSLOduration=56.481154679 podStartE2EDuration="1m21.121804988s" podCreationTimestamp="2025-11-22 09:47:35 +0000 UTC" firstStartedPulling="2025-11-22 09:47:48.409124711 +0000 UTC m=+9910.822517969" lastFinishedPulling="2025-11-22 09:48:13.04977502 +0000 UTC m=+9935.463168278" observedRunningTime="2025-11-22 09:48:13.746824108 +0000 UTC m=+9936.160217386" watchObservedRunningTime="2025-11-22 09:48:56.121804988 +0000 UTC m=+9978.535198276" Nov 22 09:48:56 crc kubenswrapper[4856]: I1122 09:48:56.148227 4856 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:48:56 crc kubenswrapper[4856]: I1122 09:48:56.338826 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rx496"] Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.153326 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rx496" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" containerID="cri-o://0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd" gracePeriod=2 Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.804855 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.901106 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-catalog-content\") pod \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.901202 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmmxf\" (UniqueName: \"kubernetes.io/projected/cc4c809b-2668-4ef4-8d64-644a0fa26c85-kube-api-access-dmmxf\") pod \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.901359 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-utilities\") pod \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\" (UID: \"cc4c809b-2668-4ef4-8d64-644a0fa26c85\") " Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.902373 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-utilities" (OuterVolumeSpecName: "utilities") pod "cc4c809b-2668-4ef4-8d64-644a0fa26c85" (UID: "cc4c809b-2668-4ef4-8d64-644a0fa26c85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.913251 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc4c809b-2668-4ef4-8d64-644a0fa26c85-kube-api-access-dmmxf" (OuterVolumeSpecName: "kube-api-access-dmmxf") pod "cc4c809b-2668-4ef4-8d64-644a0fa26c85" (UID: "cc4c809b-2668-4ef4-8d64-644a0fa26c85"). InnerVolumeSpecName "kube-api-access-dmmxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:48:57 crc kubenswrapper[4856]: I1122 09:48:57.998034 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc4c809b-2668-4ef4-8d64-644a0fa26c85" (UID: "cc4c809b-2668-4ef4-8d64-644a0fa26c85"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.003882 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.003916 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc4c809b-2668-4ef4-8d64-644a0fa26c85-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.003929 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmmxf\" (UniqueName: \"kubernetes.io/projected/cc4c809b-2668-4ef4-8d64-644a0fa26c85-kube-api-access-dmmxf\") on node \"crc\" DevicePath \"\"" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.164813 4856 generic.go:334] "Generic (PLEG): container finished" podID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerID="0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd" exitCode=0 Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.164852 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerDied","Data":"0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd"} Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.164879 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rx496" event={"ID":"cc4c809b-2668-4ef4-8d64-644a0fa26c85","Type":"ContainerDied","Data":"f63047c36bbb3ba9826f5437dcc4aec866cd7299a2e8dd9e81d32956803f2a94"} Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.164883 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rx496" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.164897 4856 scope.go:117] "RemoveContainer" containerID="0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.193761 4856 scope.go:117] "RemoveContainer" containerID="a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.198412 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rx496"] Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.206534 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rx496"] Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.214394 4856 scope.go:117] "RemoveContainer" containerID="9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.275660 4856 scope.go:117] "RemoveContainer" containerID="0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd" Nov 22 09:48:58 crc kubenswrapper[4856]: E1122 09:48:58.276155 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd\": container with ID starting with 0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd not found: ID does not exist" containerID="0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.276202 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd"} err="failed to get container status \"0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd\": rpc error: code = NotFound desc = could not find container \"0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd\": container with ID starting with 0e0289fb904b5db74d77ee66d31416a1c2441ff26b4b7ff08a003367c71dc8bd not found: ID does not exist" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.276223 4856 scope.go:117] "RemoveContainer" containerID="a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8" Nov 22 09:48:58 crc kubenswrapper[4856]: E1122 09:48:58.276712 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8\": container with ID starting with a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8 not found: ID does not exist" containerID="a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.276761 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8"} err="failed to get container status \"a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8\": rpc error: code = NotFound desc = could not find container \"a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8\": container with ID starting with a929f1033cc3b573dfaf3fa5e0236bca414e322dc41f8d76ecf3ad4d8c3ff6b8 not found: ID does not exist" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.276791 4856 scope.go:117] "RemoveContainer" 
containerID="9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379" Nov 22 09:48:58 crc kubenswrapper[4856]: E1122 09:48:58.277106 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379\": container with ID starting with 9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379 not found: ID does not exist" containerID="9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379" Nov 22 09:48:58 crc kubenswrapper[4856]: I1122 09:48:58.277148 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379"} err="failed to get container status \"9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379\": rpc error: code = NotFound desc = could not find container \"9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379\": container with ID starting with 9959123f3add96162c60c3f78890824f6dbe6835f526cba8bba05851a0233379 not found: ID does not exist" Nov 22 09:48:59 crc kubenswrapper[4856]: I1122 09:48:58.721479 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" path="/var/lib/kubelet/pods/cc4c809b-2668-4ef4-8d64-644a0fa26c85/volumes" Nov 22 09:49:59 crc kubenswrapper[4856]: I1122 09:49:59.754337 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:49:59 crc kubenswrapper[4856]: I1122 09:49:59.754939 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:50:29 crc kubenswrapper[4856]: I1122 09:50:29.754527 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:50:29 crc kubenswrapper[4856]: I1122 09:50:29.755032 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:50:59 crc kubenswrapper[4856]: I1122 09:50:59.755064 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:50:59 crc kubenswrapper[4856]: I1122 09:50:59.755691 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:50:59 crc kubenswrapper[4856]: I1122 09:50:59.755752 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:50:59 crc kubenswrapper[4856]: I1122 09:50:59.756750 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abbde5eb742856a6b1aeecc35584b7c21bad82eee17d41dfdf37702229eadfce"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:50:59 crc kubenswrapper[4856]: I1122 09:50:59.756826 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://abbde5eb742856a6b1aeecc35584b7c21bad82eee17d41dfdf37702229eadfce" gracePeriod=600 Nov 22 09:51:00 crc kubenswrapper[4856]: I1122 09:51:00.376797 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="abbde5eb742856a6b1aeecc35584b7c21bad82eee17d41dfdf37702229eadfce" exitCode=0 Nov 22 09:51:00 crc kubenswrapper[4856]: I1122 09:51:00.377139 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"abbde5eb742856a6b1aeecc35584b7c21bad82eee17d41dfdf37702229eadfce"} Nov 22 09:51:00 crc kubenswrapper[4856]: I1122 09:51:00.377170 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85"} Nov 22 09:51:00 crc kubenswrapper[4856]: I1122 09:51:00.377189 4856 scope.go:117] "RemoveContainer" containerID="55124b0483ef5b5619a054c3bc3cc8e1b60eb07c5d4b4abe7a80828b47bb94a4" Nov 22 09:53:29 crc kubenswrapper[4856]: I1122 09:53:29.754466 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:53:29 crc kubenswrapper[4856]: I1122 09:53:29.754921 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:53:59 crc kubenswrapper[4856]: I1122 09:53:59.754822 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:53:59 crc kubenswrapper[4856]: I1122 09:53:59.755426 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:54:08 crc kubenswrapper[4856]: I1122 09:54:08.993915 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npnmf"] Nov 22 09:54:08 crc kubenswrapper[4856]: E1122 09:54:08.994920 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" Nov 22 09:54:08 crc kubenswrapper[4856]: I1122 09:54:08.994936 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" Nov 22 09:54:08 crc kubenswrapper[4856]: E1122 09:54:08.994957 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="extract-utilities" Nov 22 09:54:08 crc kubenswrapper[4856]: I1122 09:54:08.994965 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="extract-utilities" Nov 22 09:54:08 crc kubenswrapper[4856]: E1122 09:54:08.994990 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="extract-content" Nov 22 09:54:08 crc kubenswrapper[4856]: I1122 09:54:08.995075 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="extract-content" Nov 22 09:54:08 crc kubenswrapper[4856]: I1122 09:54:08.995319 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc4c809b-2668-4ef4-8d64-644a0fa26c85" containerName="registry-server" Nov 22 09:54:08 crc kubenswrapper[4856]: I1122 09:54:08.997229 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.004365 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npnmf"] Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.045813 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvm92\" (UniqueName: \"kubernetes.io/projected/8810108a-453a-4d4a-806d-19989b87194e-kube-api-access-gvm92\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.046000 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-utilities\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.046021 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-catalog-content\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.147706 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-utilities\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.147765 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-catalog-content\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.147872 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvm92\" (UniqueName: \"kubernetes.io/projected/8810108a-453a-4d4a-806d-19989b87194e-kube-api-access-gvm92\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.148283 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-utilities\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.148437 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-catalog-content\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.174095 4856 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gvm92\" (UniqueName: \"kubernetes.io/projected/8810108a-453a-4d4a-806d-19989b87194e-kube-api-access-gvm92\") pod \"redhat-marketplace-npnmf\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:09 crc kubenswrapper[4856]: I1122 09:54:09.324389 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:10 crc kubenswrapper[4856]: I1122 09:54:10.132121 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npnmf"] Nov 22 09:54:10 crc kubenswrapper[4856]: I1122 09:54:10.229448 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npnmf" event={"ID":"8810108a-453a-4d4a-806d-19989b87194e","Type":"ContainerStarted","Data":"09a8acb48e87ecbb238a4ddd5e4fa4d02da47b394794c14d4c6ee2b8d978fa88"} Nov 22 09:54:11 crc kubenswrapper[4856]: I1122 09:54:11.241746 4856 generic.go:334] "Generic (PLEG): container finished" podID="8810108a-453a-4d4a-806d-19989b87194e" containerID="ccbcc25537ea00ebcac294f0041d35fb0bc878e402e404cecad5ef70513d18ce" exitCode=0 Nov 22 09:54:11 crc kubenswrapper[4856]: I1122 09:54:11.241805 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npnmf" event={"ID":"8810108a-453a-4d4a-806d-19989b87194e","Type":"ContainerDied","Data":"ccbcc25537ea00ebcac294f0041d35fb0bc878e402e404cecad5ef70513d18ce"} Nov 22 09:54:11 crc kubenswrapper[4856]: I1122 09:54:11.244612 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 09:54:13 crc kubenswrapper[4856]: I1122 09:54:13.261971 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npnmf" event={"ID":"8810108a-453a-4d4a-806d-19989b87194e","Type":"ContainerStarted","Data":"45e5a3d39bd5123c682b0678f19cb7177bf4581dffb5efa3e7874441074c220d"} Nov 22 09:54:14 crc kubenswrapper[4856]: I1122 09:54:14.279301 4856 generic.go:334] "Generic (PLEG): container finished" podID="8810108a-453a-4d4a-806d-19989b87194e" containerID="45e5a3d39bd5123c682b0678f19cb7177bf4581dffb5efa3e7874441074c220d" exitCode=0 Nov 22 09:54:14 crc kubenswrapper[4856]: I1122 09:54:14.279402 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npnmf" event={"ID":"8810108a-453a-4d4a-806d-19989b87194e","Type":"ContainerDied","Data":"45e5a3d39bd5123c682b0678f19cb7177bf4581dffb5efa3e7874441074c220d"} Nov 22 09:54:16 crc kubenswrapper[4856]: I1122 09:54:16.299671 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npnmf" event={"ID":"8810108a-453a-4d4a-806d-19989b87194e","Type":"ContainerStarted","Data":"4a9aaa04868a94963b0dbc7d2998c623c820f5f99ae35b8df393c149ccbc658e"} Nov 22 09:54:17 crc kubenswrapper[4856]: I1122 09:54:17.336772 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npnmf" podStartSLOduration=4.891253862 podStartE2EDuration="9.336751008s" podCreationTimestamp="2025-11-22 09:54:08 +0000 UTC" firstStartedPulling="2025-11-22 09:54:11.244287388 +0000 UTC m=+10293.657680646" lastFinishedPulling="2025-11-22 09:54:15.689784534 +0000 UTC m=+10298.103177792" observedRunningTime="2025-11-22 09:54:17.330258583 +0000 UTC m=+10299.743651851" watchObservedRunningTime="2025-11-22 09:54:17.336751008 +0000 UTC 
m=+10299.750144266" Nov 22 09:54:19 crc kubenswrapper[4856]: I1122 09:54:19.324590 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:19 crc kubenswrapper[4856]: I1122 09:54:19.324919 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:19 crc kubenswrapper[4856]: I1122 09:54:19.385824 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.389951 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.459269 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npnmf"] Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.459620 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npnmf" podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="registry-server" containerID="cri-o://4a9aaa04868a94963b0dbc7d2998c623c820f5f99ae35b8df393c149ccbc658e" gracePeriod=2 Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.755304 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.755958 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.756205 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.758048 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 09:54:29 crc kubenswrapper[4856]: I1122 09:54:29.758363 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" gracePeriod=600 Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.441974 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" exitCode=0 Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.442033 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85"} Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.442309 4856 scope.go:117] "RemoveContainer" containerID="abbde5eb742856a6b1aeecc35584b7c21bad82eee17d41dfdf37702229eadfce" Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.445764 4856 generic.go:334] "Generic (PLEG): container finished" podID="8810108a-453a-4d4a-806d-19989b87194e" containerID="4a9aaa04868a94963b0dbc7d2998c623c820f5f99ae35b8df393c149ccbc658e" exitCode=0 Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.445798 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npnmf" event={"ID":"8810108a-453a-4d4a-806d-19989b87194e","Type":"ContainerDied","Data":"4a9aaa04868a94963b0dbc7d2998c623c820f5f99ae35b8df393c149ccbc658e"} Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.661674 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:30 crc kubenswrapper[4856]: E1122 09:54:30.710503 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.724154 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-utilities\") pod \"8810108a-453a-4d4a-806d-19989b87194e\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.724362 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-catalog-content\") pod \"8810108a-453a-4d4a-806d-19989b87194e\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.724391 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvm92\" (UniqueName: \"kubernetes.io/projected/8810108a-453a-4d4a-806d-19989b87194e-kube-api-access-gvm92\") pod \"8810108a-453a-4d4a-806d-19989b87194e\" (UID: \"8810108a-453a-4d4a-806d-19989b87194e\") " Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.726098 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-utilities" (OuterVolumeSpecName: "utilities") pod "8810108a-453a-4d4a-806d-19989b87194e" (UID: "8810108a-453a-4d4a-806d-19989b87194e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.731535 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8810108a-453a-4d4a-806d-19989b87194e-kube-api-access-gvm92" (OuterVolumeSpecName: "kube-api-access-gvm92") pod "8810108a-453a-4d4a-806d-19989b87194e" (UID: "8810108a-453a-4d4a-806d-19989b87194e"). InnerVolumeSpecName "kube-api-access-gvm92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.744922 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8810108a-453a-4d4a-806d-19989b87194e" (UID: "8810108a-453a-4d4a-806d-19989b87194e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.826733 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.826767 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8810108a-453a-4d4a-806d-19989b87194e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:54:30 crc kubenswrapper[4856]: I1122 09:54:30.826779 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvm92\" (UniqueName: \"kubernetes.io/projected/8810108a-453a-4d4a-806d-19989b87194e-kube-api-access-gvm92\") on node \"crc\" DevicePath \"\"" Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.458770 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:54:31 crc kubenswrapper[4856]: E1122 09:54:31.460266 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.460904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npnmf" event={"ID":"8810108a-453a-4d4a-806d-19989b87194e","Type":"ContainerDied","Data":"09a8acb48e87ecbb238a4ddd5e4fa4d02da47b394794c14d4c6ee2b8d978fa88"} Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.460953 4856 scope.go:117] "RemoveContainer" containerID="4a9aaa04868a94963b0dbc7d2998c623c820f5f99ae35b8df393c149ccbc658e" Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.461031 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npnmf" Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.493735 4856 scope.go:117] "RemoveContainer" containerID="45e5a3d39bd5123c682b0678f19cb7177bf4581dffb5efa3e7874441074c220d" Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.539662 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npnmf"] Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.550155 4856 scope.go:117] "RemoveContainer" containerID="ccbcc25537ea00ebcac294f0041d35fb0bc878e402e404cecad5ef70513d18ce" Nov 22 09:54:31 crc kubenswrapper[4856]: I1122 09:54:31.557006 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-npnmf"] Nov 22 09:54:32 crc kubenswrapper[4856]: I1122 09:54:32.732368 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8810108a-453a-4d4a-806d-19989b87194e" path="/var/lib/kubelet/pods/8810108a-453a-4d4a-806d-19989b87194e/volumes" Nov 22 09:54:46 crc kubenswrapper[4856]: I1122 09:54:46.709559 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:54:46 crc kubenswrapper[4856]: E1122 09:54:46.710372 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:55:00 crc kubenswrapper[4856]: I1122 09:55:00.713193 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:55:00 crc kubenswrapper[4856]: E1122 09:55:00.713973 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:55:13 crc kubenswrapper[4856]: I1122 09:55:13.709539 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:55:13 crc kubenswrapper[4856]: E1122 09:55:13.710916 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.459245 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9k4zn"] Nov 22 09:55:19 crc kubenswrapper[4856]: E1122 09:55:19.460295 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="extract-content" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.460313 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="extract-content" Nov 22 09:55:19 crc kubenswrapper[4856]: E1122 09:55:19.460327 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="extract-utilities" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.460335 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="extract-utilities" Nov 22 09:55:19 crc kubenswrapper[4856]: E1122 09:55:19.460391 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="registry-server" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.460402 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="registry-server" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.460678 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="8810108a-453a-4d4a-806d-19989b87194e" containerName="registry-server" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.462634 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.473684 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k4zn"] Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.631599 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-utilities\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.632090 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b758\" (UniqueName: \"kubernetes.io/projected/e379f331-9d7a-4574-8060-fc919a7b3610-kube-api-access-5b758\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.632563 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-catalog-content\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.734291 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b758\" (UniqueName: \"kubernetes.io/projected/e379f331-9d7a-4574-8060-fc919a7b3610-kube-api-access-5b758\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.734567 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-catalog-content\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 
09:55:19.735315 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-catalog-content\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.735422 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-utilities\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.735886 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-utilities\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.755079 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b758\" (UniqueName: \"kubernetes.io/projected/e379f331-9d7a-4574-8060-fc919a7b3610-kube-api-access-5b758\") pod \"community-operators-9k4zn\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:19 crc kubenswrapper[4856]: I1122 09:55:19.789241 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:20 crc kubenswrapper[4856]: I1122 09:55:20.802404 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9k4zn"] Nov 22 09:55:20 crc kubenswrapper[4856]: I1122 09:55:20.942341 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k4zn" event={"ID":"e379f331-9d7a-4574-8060-fc919a7b3610","Type":"ContainerStarted","Data":"dbc52ae65698e4c1a4a0eccb691ad77e2175053c093f3381287a06440208c7bb"} Nov 22 09:55:22 crc kubenswrapper[4856]: I1122 09:55:22.979206 4856 generic.go:334] "Generic (PLEG): container finished" podID="e379f331-9d7a-4574-8060-fc919a7b3610" containerID="11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b" exitCode=0 Nov 22 09:55:22 crc kubenswrapper[4856]: I1122 09:55:22.979435 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k4zn" event={"ID":"e379f331-9d7a-4574-8060-fc919a7b3610","Type":"ContainerDied","Data":"11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b"} Nov 22 09:55:26 crc kubenswrapper[4856]: I1122 09:55:26.709769 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:55:26 crc kubenswrapper[4856]: E1122 09:55:26.718953 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:55:29 crc kubenswrapper[4856]: I1122 09:55:29.037967 4856 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-9k4zn" event={"ID":"e379f331-9d7a-4574-8060-fc919a7b3610","Type":"ContainerStarted","Data":"7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876"} Nov 22 09:55:36 crc kubenswrapper[4856]: I1122 09:55:36.105118 4856 generic.go:334] "Generic (PLEG): container finished" podID="e379f331-9d7a-4574-8060-fc919a7b3610" containerID="7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876" exitCode=0 Nov 22 09:55:36 crc kubenswrapper[4856]: I1122 09:55:36.105601 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k4zn" event={"ID":"e379f331-9d7a-4574-8060-fc919a7b3610","Type":"ContainerDied","Data":"7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876"} Nov 22 09:55:38 crc kubenswrapper[4856]: I1122 09:55:38.127945 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k4zn" event={"ID":"e379f331-9d7a-4574-8060-fc919a7b3610","Type":"ContainerStarted","Data":"c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3"} Nov 22 09:55:38 crc kubenswrapper[4856]: I1122 09:55:38.153298 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9k4zn" podStartSLOduration=4.983798933 podStartE2EDuration="19.153274904s" podCreationTimestamp="2025-11-22 09:55:19 +0000 UTC" firstStartedPulling="2025-11-22 09:55:22.981101097 +0000 UTC m=+10365.394494355" lastFinishedPulling="2025-11-22 09:55:37.150577068 +0000 UTC m=+10379.563970326" observedRunningTime="2025-11-22 09:55:38.14679277 +0000 UTC m=+10380.560186028" watchObservedRunningTime="2025-11-22 09:55:38.153274904 +0000 UTC m=+10380.566668162" Nov 22 09:55:39 crc kubenswrapper[4856]: I1122 09:55:39.789355 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:39 crc kubenswrapper[4856]: I1122 09:55:39.789680 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:39 crc kubenswrapper[4856]: I1122 09:55:39.845472 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:40 crc kubenswrapper[4856]: I1122 09:55:40.709864 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:55:40 crc kubenswrapper[4856]: E1122 09:55:40.710110 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.054210 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t77qs"] Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.057390 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.084843 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t77qs"] Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.189474 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-utilities\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.189706 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtdm\" (UniqueName: \"kubernetes.io/projected/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-kube-api-access-kgtdm\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.189965 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-catalog-content\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.291754 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-utilities\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.291942 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgtdm\" (UniqueName: \"kubernetes.io/projected/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-kube-api-access-kgtdm\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.292020 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-catalog-content\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.292442 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-utilities\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.292496 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-catalog-content\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.321383 4856 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kgtdm\" (UniqueName: \"kubernetes.io/projected/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-kube-api-access-kgtdm\") pod \"certified-operators-t77qs\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:42 crc kubenswrapper[4856]: I1122 09:55:42.396482 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:43 crc kubenswrapper[4856]: I1122 09:55:43.066310 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t77qs"] Nov 22 09:55:43 crc kubenswrapper[4856]: I1122 09:55:43.170053 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t77qs" event={"ID":"90cf5f3e-35b5-47d2-8389-d1184b3a52b9","Type":"ContainerStarted","Data":"2145737cc888202805f4dc51a179ac3cdeb3ee824bcdcd91c6b89fc9a1e08678"} Nov 22 09:55:44 crc kubenswrapper[4856]: I1122 09:55:44.180671 4856 generic.go:334] "Generic (PLEG): container finished" podID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerID="fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e" exitCode=0 Nov 22 09:55:44 crc kubenswrapper[4856]: I1122 09:55:44.180833 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t77qs" event={"ID":"90cf5f3e-35b5-47d2-8389-d1184b3a52b9","Type":"ContainerDied","Data":"fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e"} Nov 22 09:55:47 crc kubenswrapper[4856]: I1122 09:55:47.211525 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t77qs" event={"ID":"90cf5f3e-35b5-47d2-8389-d1184b3a52b9","Type":"ContainerStarted","Data":"e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b"} Nov 22 09:55:49 crc kubenswrapper[4856]: I1122 09:55:49.849779 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:49 crc kubenswrapper[4856]: I1122 09:55:49.896728 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k4zn"] Nov 22 09:55:50 crc kubenswrapper[4856]: I1122 09:55:50.239758 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9k4zn" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="registry-server" containerID="cri-o://c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3" gracePeriod=2 Nov 22 09:55:50 crc kubenswrapper[4856]: I1122 09:55:50.790975 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:50 crc kubenswrapper[4856]: I1122 09:55:50.971995 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-catalog-content\") pod \"e379f331-9d7a-4574-8060-fc919a7b3610\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " Nov 22 09:55:50 crc kubenswrapper[4856]: I1122 09:55:50.972371 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-utilities\") pod \"e379f331-9d7a-4574-8060-fc919a7b3610\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " Nov 22 09:55:50 crc kubenswrapper[4856]: I1122 09:55:50.972488 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b758\" (UniqueName: \"kubernetes.io/projected/e379f331-9d7a-4574-8060-fc919a7b3610-kube-api-access-5b758\") pod \"e379f331-9d7a-4574-8060-fc919a7b3610\" (UID: \"e379f331-9d7a-4574-8060-fc919a7b3610\") " Nov 22 09:55:50 crc kubenswrapper[4856]: I1122 09:55:50.973419 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-utilities" (OuterVolumeSpecName: "utilities") pod "e379f331-9d7a-4574-8060-fc919a7b3610" (UID: "e379f331-9d7a-4574-8060-fc919a7b3610"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:55:50 crc kubenswrapper[4856]: I1122 09:55:50.980074 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e379f331-9d7a-4574-8060-fc919a7b3610-kube-api-access-5b758" (OuterVolumeSpecName: "kube-api-access-5b758") pod "e379f331-9d7a-4574-8060-fc919a7b3610" (UID: "e379f331-9d7a-4574-8060-fc919a7b3610"). InnerVolumeSpecName "kube-api-access-5b758". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.033795 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e379f331-9d7a-4574-8060-fc919a7b3610" (UID: "e379f331-9d7a-4574-8060-fc919a7b3610"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.076116 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.076151 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e379f331-9d7a-4574-8060-fc919a7b3610-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.076161 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b758\" (UniqueName: \"kubernetes.io/projected/e379f331-9d7a-4574-8060-fc919a7b3610-kube-api-access-5b758\") on node \"crc\" DevicePath \"\"" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.254332 4856 generic.go:334] "Generic (PLEG): container finished" podID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerID="e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b" exitCode=0 Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.254405 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t77qs" event={"ID":"90cf5f3e-35b5-47d2-8389-d1184b3a52b9","Type":"ContainerDied","Data":"e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b"} Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.257088 4856 generic.go:334] "Generic (PLEG): container finished" podID="e379f331-9d7a-4574-8060-fc919a7b3610" containerID="c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3" exitCode=0 Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.257120 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k4zn" event={"ID":"e379f331-9d7a-4574-8060-fc919a7b3610","Type":"ContainerDied","Data":"c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3"} Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.257147 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9k4zn" event={"ID":"e379f331-9d7a-4574-8060-fc919a7b3610","Type":"ContainerDied","Data":"dbc52ae65698e4c1a4a0eccb691ad77e2175053c093f3381287a06440208c7bb"} Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.257169 4856 scope.go:117] "RemoveContainer" containerID="c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.257291 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9k4zn" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.287256 4856 scope.go:117] "RemoveContainer" containerID="7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.337652 4856 scope.go:117] "RemoveContainer" containerID="11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.386672 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9k4zn"] Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.443852 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9k4zn"] Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.448075 4856 scope.go:117] "RemoveContainer" containerID="c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3" Nov 22 09:55:51 crc kubenswrapper[4856]: E1122 09:55:51.458024 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3\": container with ID starting with c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3 not found: ID does not exist" containerID="c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.458081 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3"} err="failed to get container status \"c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3\": rpc error: code = NotFound desc = could not find container \"c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3\": container with ID starting with c5577d3a726570406ea7aa98a7d5a011eccfcceee0353f5405febd884d49b0d3 not found: ID does not exist" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.458106 4856 scope.go:117] "RemoveContainer" containerID="7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876" Nov 22 09:55:51 crc kubenswrapper[4856]: E1122 09:55:51.468089 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876\": container with ID starting with 7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876 not found: ID does not exist" containerID="7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.468149 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876"} err="failed to get container status \"7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876\": rpc error: code = NotFound desc = could not find container \"7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876\": container with ID starting with 7029b3f2d98741659fe3d75006fc8afba79c076fda9d3a2cae0feadffec4e876 not found: ID does not exist" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.468182 4856 scope.go:117] "RemoveContainer" containerID="11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b" Nov 22 09:55:51 crc kubenswrapper[4856]: E1122 09:55:51.470900 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b\": container with ID starting with 11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b not found: ID does not exist" containerID="11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b" Nov 22 09:55:51 crc kubenswrapper[4856]: I1122 09:55:51.470939 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b"} err="failed to get container status \"11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b\": rpc error: code = NotFound desc = could not find container \"11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b\": container with ID starting with 11be47bfa11d716af372be4d2faa592a26b8d1b3f148ebe223bce9453a247d9b not found: ID does not exist" Nov 22 09:55:52 crc kubenswrapper[4856]: I1122 09:55:52.270984 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t77qs" event={"ID":"90cf5f3e-35b5-47d2-8389-d1184b3a52b9","Type":"ContainerStarted","Data":"92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62"} Nov 22 09:55:52 crc kubenswrapper[4856]: I1122 09:55:52.291804 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t77qs" podStartSLOduration=2.676235873 podStartE2EDuration="10.29178379s" podCreationTimestamp="2025-11-22 09:55:42 +0000 UTC" firstStartedPulling="2025-11-22 09:55:44.182559672 +0000 UTC m=+10386.595952930" lastFinishedPulling="2025-11-22 09:55:51.798107589 +0000 UTC m=+10394.211500847" observedRunningTime="2025-11-22 09:55:52.289328514 +0000 UTC m=+10394.702721782" watchObservedRunningTime="2025-11-22 09:55:52.29178379 +0000 UTC m=+10394.705177048" Nov 22 09:55:52 crc kubenswrapper[4856]: I1122 09:55:52.397230 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:52 crc kubenswrapper[4856]: I1122 09:55:52.397275 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:55:52 crc kubenswrapper[4856]: I1122 09:55:52.710016 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:55:52 crc kubenswrapper[4856]: E1122 09:55:52.710559 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:55:52 crc kubenswrapper[4856]: I1122 09:55:52.721465 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" path="/var/lib/kubelet/pods/e379f331-9d7a-4574-8060-fc919a7b3610/volumes" Nov 22 09:55:53 crc kubenswrapper[4856]: I1122 09:55:53.448142 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-t77qs" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="registry-server" probeResult="failure" output=< Nov 22 09:55:53 crc kubenswrapper[4856]: timeout: failed to connect 
service ":50051" within 1s Nov 22 09:55:53 crc kubenswrapper[4856]: > Nov 22 09:56:02 crc kubenswrapper[4856]: I1122 09:56:02.450161 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:56:02 crc kubenswrapper[4856]: I1122 09:56:02.518055 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:56:02 crc kubenswrapper[4856]: I1122 09:56:02.701876 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t77qs"] Nov 22 09:56:04 crc kubenswrapper[4856]: I1122 09:56:04.381899 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t77qs" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="registry-server" containerID="cri-o://92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62" gracePeriod=2 Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.008671 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.187645 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgtdm\" (UniqueName: \"kubernetes.io/projected/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-kube-api-access-kgtdm\") pod \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.187729 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-utilities\") pod \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.187824 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-catalog-content\") pod \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\" (UID: \"90cf5f3e-35b5-47d2-8389-d1184b3a52b9\") " Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.188776 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-utilities" (OuterVolumeSpecName: "utilities") pod "90cf5f3e-35b5-47d2-8389-d1184b3a52b9" (UID: "90cf5f3e-35b5-47d2-8389-d1184b3a52b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.201339 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-kube-api-access-kgtdm" (OuterVolumeSpecName: "kube-api-access-kgtdm") pod "90cf5f3e-35b5-47d2-8389-d1184b3a52b9" (UID: "90cf5f3e-35b5-47d2-8389-d1184b3a52b9"). InnerVolumeSpecName "kube-api-access-kgtdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.235847 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90cf5f3e-35b5-47d2-8389-d1184b3a52b9" (UID: "90cf5f3e-35b5-47d2-8389-d1184b3a52b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.290486 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.290530 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.290545 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgtdm\" (UniqueName: \"kubernetes.io/projected/90cf5f3e-35b5-47d2-8389-d1184b3a52b9-kube-api-access-kgtdm\") on node \"crc\" DevicePath \"\"" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.392144 4856 generic.go:334] "Generic (PLEG): container finished" podID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerID="92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62" exitCode=0 Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.392184 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t77qs" event={"ID":"90cf5f3e-35b5-47d2-8389-d1184b3a52b9","Type":"ContainerDied","Data":"92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62"} Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.392215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t77qs" event={"ID":"90cf5f3e-35b5-47d2-8389-d1184b3a52b9","Type":"ContainerDied","Data":"2145737cc888202805f4dc51a179ac3cdeb3ee824bcdcd91c6b89fc9a1e08678"} Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.392233 4856 scope.go:117] "RemoveContainer" containerID="92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.392254 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t77qs" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.413332 4856 scope.go:117] "RemoveContainer" containerID="e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.431238 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t77qs"] Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.439950 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t77qs"] Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.442725 4856 scope.go:117] "RemoveContainer" containerID="fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.497273 4856 scope.go:117] "RemoveContainer" containerID="92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62" Nov 22 09:56:05 crc kubenswrapper[4856]: E1122 09:56:05.497797 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62\": container with ID starting with 92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62 not found: ID does not exist" containerID="92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.497859 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62"} err="failed to get container status \"92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62\": rpc error: code = NotFound desc = could not find container \"92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62\": container with ID starting with 92a2593aba6991e1b95ed06bf5e8f1ef49d063e9237d0d6ab66100b988a7eb62 not found: ID does not exist" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.497892 4856 scope.go:117] "RemoveContainer" containerID="e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b" Nov 22 09:56:05 crc kubenswrapper[4856]: E1122 09:56:05.498340 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b\": container with ID starting with e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b not found: ID does not exist" containerID="e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.498370 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b"} err="failed to get container status \"e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b\": rpc error: code = NotFound desc = could not find container \"e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b\": container with ID starting with e15fb077bbe588238ec31f5e7f9f1d399ad2c5c5b70d135618fa5b2f40b7289b not found: ID does not exist" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.498393 4856 scope.go:117] "RemoveContainer" containerID="fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e" Nov 22 09:56:05 crc kubenswrapper[4856]: E1122 09:56:05.498885 4856 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e\": container with ID starting with fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e not found: ID does not exist" containerID="fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e" Nov 22 09:56:05 crc kubenswrapper[4856]: I1122 09:56:05.498908 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e"} err="failed to get container status \"fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e\": rpc error: code = NotFound desc = could not find container \"fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e\": container with ID starting with fa00551e8b2ff8d96822c3ab0254aab93e9d929c43adb9d1d2ad0facc193837e not found: ID does not exist" Nov 22 09:56:06 crc kubenswrapper[4856]: I1122 09:56:06.709929 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:56:06 crc kubenswrapper[4856]: E1122 09:56:06.710553 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:56:06 crc kubenswrapper[4856]: I1122 09:56:06.719448 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" path="/var/lib/kubelet/pods/90cf5f3e-35b5-47d2-8389-d1184b3a52b9/volumes" Nov 22 09:56:20 crc kubenswrapper[4856]: I1122 09:56:20.710667 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:56:20 crc kubenswrapper[4856]: E1122 09:56:20.712163 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:56:34 crc kubenswrapper[4856]: I1122 09:56:34.710535 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:56:34 crc kubenswrapper[4856]: E1122 09:56:34.711276 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:56:48 crc kubenswrapper[4856]: I1122 09:56:48.718223 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:56:48 crc kubenswrapper[4856]: E1122 09:56:48.718977 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:57:02 crc kubenswrapper[4856]: I1122 09:57:02.710227 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:57:02 crc kubenswrapper[4856]: E1122 09:57:02.711155 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:57:17 crc kubenswrapper[4856]: I1122 09:57:17.709894 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:57:17 crc kubenswrapper[4856]: E1122 09:57:17.710672 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:57:31 crc kubenswrapper[4856]: I1122 09:57:31.710591 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:57:31 crc kubenswrapper[4856]: E1122 09:57:31.711367 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:57:42 crc kubenswrapper[4856]: I1122 09:57:42.712833 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:57:42 crc kubenswrapper[4856]: E1122 09:57:42.713632 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.126625 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w44dw"] Nov 22 09:57:52 crc kubenswrapper[4856]: E1122 09:57:52.127907 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="extract-utilities" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.127928 4856 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="extract-utilities" Nov 22 09:57:52 crc kubenswrapper[4856]: E1122 09:57:52.127950 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="extract-content" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.127963 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="extract-content" Nov 22 09:57:52 crc kubenswrapper[4856]: E1122 09:57:52.127990 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="registry-server" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.128003 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="registry-server" Nov 22 09:57:52 crc kubenswrapper[4856]: E1122 09:57:52.128060 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="registry-server" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.128072 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="registry-server" Nov 22 09:57:52 crc kubenswrapper[4856]: E1122 09:57:52.128104 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="extract-content" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.128116 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="extract-content" Nov 22 09:57:52 crc kubenswrapper[4856]: E1122 09:57:52.128139 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="extract-utilities" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.128151 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="extract-utilities" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.128497 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="e379f331-9d7a-4574-8060-fc919a7b3610" containerName="registry-server" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.128585 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="90cf5f3e-35b5-47d2-8389-d1184b3a52b9" containerName="registry-server" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.130776 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.137823 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w44dw"] Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.266832 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxktn\" (UniqueName: \"kubernetes.io/projected/7a265549-4a53-4638-8c8c-f430209d117d-kube-api-access-qxktn\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.266939 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-utilities\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.267002 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-catalog-content\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.371157 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-utilities\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.371322 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-catalog-content\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.371470 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxktn\" (UniqueName: \"kubernetes.io/projected/7a265549-4a53-4638-8c8c-f430209d117d-kube-api-access-qxktn\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.372047 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-utilities\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.372143 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-catalog-content\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.401863 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qxktn\" (UniqueName: \"kubernetes.io/projected/7a265549-4a53-4638-8c8c-f430209d117d-kube-api-access-qxktn\") pod \"redhat-operators-w44dw\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:52 crc kubenswrapper[4856]: I1122 09:57:52.468946 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:57:53 crc kubenswrapper[4856]: I1122 09:57:53.019757 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w44dw"] Nov 22 09:57:53 crc kubenswrapper[4856]: I1122 09:57:53.457828 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a265549-4a53-4638-8c8c-f430209d117d" containerID="5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb" exitCode=0 Nov 22 09:57:53 crc kubenswrapper[4856]: I1122 09:57:53.458654 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w44dw" event={"ID":"7a265549-4a53-4638-8c8c-f430209d117d","Type":"ContainerDied","Data":"5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb"} Nov 22 09:57:53 crc kubenswrapper[4856]: I1122 09:57:53.458745 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w44dw" event={"ID":"7a265549-4a53-4638-8c8c-f430209d117d","Type":"ContainerStarted","Data":"5d012efc0e5a1c64df22bcdc583ed326fa600440bf8a103d4862f1e87aa598f2"} Nov 22 09:57:54 crc kubenswrapper[4856]: I1122 09:57:54.470842 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w44dw" event={"ID":"7a265549-4a53-4638-8c8c-f430209d117d","Type":"ContainerStarted","Data":"e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec"} Nov 22 09:57:57 crc kubenswrapper[4856]: I1122 09:57:57.710984 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:57:57 crc kubenswrapper[4856]: E1122 09:57:57.711755 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:57:58 crc kubenswrapper[4856]: I1122 09:57:58.516665 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a265549-4a53-4638-8c8c-f430209d117d" containerID="e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec" exitCode=0 Nov 22 09:57:58 crc kubenswrapper[4856]: I1122 09:57:58.516974 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w44dw" event={"ID":"7a265549-4a53-4638-8c8c-f430209d117d","Type":"ContainerDied","Data":"e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec"} Nov 22 09:57:59 crc kubenswrapper[4856]: I1122 09:57:59.534206 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w44dw" event={"ID":"7a265549-4a53-4638-8c8c-f430209d117d","Type":"ContainerStarted","Data":"9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686"} Nov 22 09:57:59 crc kubenswrapper[4856]: I1122 09:57:59.552222 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-w44dw" podStartSLOduration=2.115884934 podStartE2EDuration="7.552206835s" podCreationTimestamp="2025-11-22 09:57:52 +0000 UTC" firstStartedPulling="2025-11-22 09:57:53.45954675 +0000 UTC m=+10515.872940008" lastFinishedPulling="2025-11-22 09:57:58.895868641 +0000 UTC m=+10521.309261909" observedRunningTime="2025-11-22 09:57:59.551103215 +0000 UTC m=+10521.964496473" watchObservedRunningTime="2025-11-22 09:57:59.552206835 +0000 UTC m=+10521.965600093" Nov 22 09:58:02 crc kubenswrapper[4856]: I1122 09:58:02.469924 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:58:02 crc kubenswrapper[4856]: I1122 09:58:02.470432 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:58:03 crc kubenswrapper[4856]: I1122 09:58:03.512320 4856 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w44dw" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="registry-server" probeResult="failure" output=< Nov 22 09:58:03 crc kubenswrapper[4856]: timeout: failed to connect service ":50051" within 1s Nov 22 09:58:03 crc kubenswrapper[4856]: > Nov 22 09:58:08 crc kubenswrapper[4856]: I1122 09:58:08.720630 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:58:08 crc kubenswrapper[4856]: E1122 09:58:08.721791 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:58:12 crc kubenswrapper[4856]: I1122 09:58:12.519292 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:58:12 crc kubenswrapper[4856]: I1122 09:58:12.569333 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:58:12 crc kubenswrapper[4856]: I1122 09:58:12.756913 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w44dw"] Nov 22 09:58:13 crc kubenswrapper[4856]: I1122 09:58:13.667405 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w44dw" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="registry-server" containerID="cri-o://9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686" gracePeriod=2 Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.242224 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.341182 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxktn\" (UniqueName: \"kubernetes.io/projected/7a265549-4a53-4638-8c8c-f430209d117d-kube-api-access-qxktn\") pod \"7a265549-4a53-4638-8c8c-f430209d117d\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.341320 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-catalog-content\") pod \"7a265549-4a53-4638-8c8c-f430209d117d\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.341580 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-utilities\") pod \"7a265549-4a53-4638-8c8c-f430209d117d\" (UID: \"7a265549-4a53-4638-8c8c-f430209d117d\") " Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.342220 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-utilities" (OuterVolumeSpecName: "utilities") pod "7a265549-4a53-4638-8c8c-f430209d117d" (UID: "7a265549-4a53-4638-8c8c-f430209d117d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.342446 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.346542 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a265549-4a53-4638-8c8c-f430209d117d-kube-api-access-qxktn" (OuterVolumeSpecName: "kube-api-access-qxktn") pod "7a265549-4a53-4638-8c8c-f430209d117d" (UID: "7a265549-4a53-4638-8c8c-f430209d117d"). InnerVolumeSpecName "kube-api-access-qxktn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.444617 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxktn\" (UniqueName: \"kubernetes.io/projected/7a265549-4a53-4638-8c8c-f430209d117d-kube-api-access-qxktn\") on node \"crc\" DevicePath \"\"" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.445410 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a265549-4a53-4638-8c8c-f430209d117d" (UID: "7a265549-4a53-4638-8c8c-f430209d117d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.546685 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a265549-4a53-4638-8c8c-f430209d117d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.679813 4856 generic.go:334] "Generic (PLEG): container finished" podID="7a265549-4a53-4638-8c8c-f430209d117d" containerID="9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686" exitCode=0 Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.679865 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w44dw" event={"ID":"7a265549-4a53-4638-8c8c-f430209d117d","Type":"ContainerDied","Data":"9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686"} Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.679903 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w44dw" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.679918 4856 scope.go:117] "RemoveContainer" containerID="9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.679904 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w44dw" event={"ID":"7a265549-4a53-4638-8c8c-f430209d117d","Type":"ContainerDied","Data":"5d012efc0e5a1c64df22bcdc583ed326fa600440bf8a103d4862f1e87aa598f2"} Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.736624 4856 scope.go:117] "RemoveContainer" containerID="e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.736923 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w44dw"] Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.740009 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w44dw"] Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.762628 4856 scope.go:117] "RemoveContainer" containerID="5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.806400 4856 scope.go:117] "RemoveContainer" containerID="9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686" Nov 22 09:58:14 crc kubenswrapper[4856]: E1122 09:58:14.806966 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686\": container with ID starting with 9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686 not found: ID does not exist" containerID="9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.807001 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686"} err="failed to get container status \"9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686\": rpc error: code = NotFound desc = could not find container \"9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686\": container with ID starting with 9d353f419230aa6a247d04881ad23c94678c3b3e22c9a05680343c4499c48686 not found: ID does not exist" Nov 22 09:58:14 crc 
kubenswrapper[4856]: I1122 09:58:14.807047 4856 scope.go:117] "RemoveContainer" containerID="e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec" Nov 22 09:58:14 crc kubenswrapper[4856]: E1122 09:58:14.807421 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec\": container with ID starting with e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec not found: ID does not exist" containerID="e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.807478 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec"} err="failed to get container status \"e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec\": rpc error: code = NotFound desc = could not find container \"e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec\": container with ID starting with e1986fb7d345fc0ad43a847e008e622ce90f7474ece0a72c11a6d6e953d266ec not found: ID does not exist" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.807530 4856 scope.go:117] "RemoveContainer" containerID="5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb" Nov 22 09:58:14 crc kubenswrapper[4856]: E1122 09:58:14.807820 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb\": container with ID starting with 5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb not found: ID does not exist" containerID="5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb" Nov 22 09:58:14 crc kubenswrapper[4856]: I1122 09:58:14.807849 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb"} err="failed to get container status \"5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb\": rpc error: code = NotFound desc = could not find container \"5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb\": container with ID starting with 5892339ca60701c5bacee4ba71f8c5f7a59817315d64f91b8ede581ac33d4fdb not found: ID does not exist" Nov 22 09:58:16 crc kubenswrapper[4856]: I1122 09:58:16.727194 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a265549-4a53-4638-8c8c-f430209d117d" path="/var/lib/kubelet/pods/7a265549-4a53-4638-8c8c-f430209d117d/volumes" Nov 22 09:58:21 crc kubenswrapper[4856]: I1122 09:58:21.711114 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:58:21 crc kubenswrapper[4856]: E1122 09:58:21.711860 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:58:36 crc kubenswrapper[4856]: I1122 09:58:36.710733 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" 
Nov 22 09:58:36 crc kubenswrapper[4856]: E1122 09:58:36.711681 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:58:48 crc kubenswrapper[4856]: I1122 09:58:48.716605 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:58:48 crc kubenswrapper[4856]: E1122 09:58:48.717461 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:58:59 crc kubenswrapper[4856]: I1122 09:58:59.720526 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:58:59 crc kubenswrapper[4856]: E1122 09:58:59.722009 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:59:14 crc kubenswrapper[4856]: I1122 09:59:14.711096 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:59:14 crc kubenswrapper[4856]: E1122 09:59:14.712611 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:59:29 crc kubenswrapper[4856]: I1122 09:59:29.710411 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:59:29 crc kubenswrapper[4856]: E1122 09:59:29.711184 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 09:59:42 crc kubenswrapper[4856]: I1122 09:59:42.710054 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 09:59:43 crc kubenswrapper[4856]: I1122 09:59:43.579392 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" 
event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"7b8d0d88e8da694286b9829436d41459bda182635bbecc7206139ad174a04590"} Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.168939 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht"] Nov 22 10:00:00 crc kubenswrapper[4856]: E1122 10:00:00.169802 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="extract-content" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.169813 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="extract-content" Nov 22 10:00:00 crc kubenswrapper[4856]: E1122 10:00:00.169825 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="registry-server" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.169832 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="registry-server" Nov 22 10:00:00 crc kubenswrapper[4856]: E1122 10:00:00.169866 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="extract-utilities" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.169872 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="extract-utilities" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.170041 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a265549-4a53-4638-8c8c-f430209d117d" containerName="registry-server" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.170701 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.176446 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.176913 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.190422 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht"] Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.294799 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt5gm\" (UniqueName: \"kubernetes.io/projected/83ab1f64-b930-4051-934b-664976b0a413-kube-api-access-gt5gm\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.294948 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83ab1f64-b930-4051-934b-664976b0a413-config-volume\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.294975 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83ab1f64-b930-4051-934b-664976b0a413-secret-volume\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.396948 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt5gm\" (UniqueName: \"kubernetes.io/projected/83ab1f64-b930-4051-934b-664976b0a413-kube-api-access-gt5gm\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.397535 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83ab1f64-b930-4051-934b-664976b0a413-config-volume\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.397571 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83ab1f64-b930-4051-934b-664976b0a413-secret-volume\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.398589 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83ab1f64-b930-4051-934b-664976b0a413-config-volume\") pod 
\"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.404982 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83ab1f64-b930-4051-934b-664976b0a413-secret-volume\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.418821 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt5gm\" (UniqueName: \"kubernetes.io/projected/83ab1f64-b930-4051-934b-664976b0a413-kube-api-access-gt5gm\") pod \"collect-profiles-29396760-rmqht\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:00 crc kubenswrapper[4856]: I1122 10:00:00.550436 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:01 crc kubenswrapper[4856]: I1122 10:00:01.057710 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht"] Nov 22 10:00:01 crc kubenswrapper[4856]: I1122 10:00:01.755837 4856 generic.go:334] "Generic (PLEG): container finished" podID="83ab1f64-b930-4051-934b-664976b0a413" containerID="c20c15a03037cb36f75639bf7a3a58882423109be0b52c4b801f7c186b45786f" exitCode=0 Nov 22 10:00:01 crc kubenswrapper[4856]: I1122 10:00:01.755888 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" event={"ID":"83ab1f64-b930-4051-934b-664976b0a413","Type":"ContainerDied","Data":"c20c15a03037cb36f75639bf7a3a58882423109be0b52c4b801f7c186b45786f"} Nov 22 10:00:01 crc kubenswrapper[4856]: I1122 10:00:01.756113 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" event={"ID":"83ab1f64-b930-4051-934b-664976b0a413","Type":"ContainerStarted","Data":"d41af43bf6dc72dfd1e9262e3d2b5637b062d3f41d822b5cc37c3050c6037f9c"} Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.160191 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.304444 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt5gm\" (UniqueName: \"kubernetes.io/projected/83ab1f64-b930-4051-934b-664976b0a413-kube-api-access-gt5gm\") pod \"83ab1f64-b930-4051-934b-664976b0a413\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.304532 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83ab1f64-b930-4051-934b-664976b0a413-config-volume\") pod \"83ab1f64-b930-4051-934b-664976b0a413\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.304652 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83ab1f64-b930-4051-934b-664976b0a413-secret-volume\") pod \"83ab1f64-b930-4051-934b-664976b0a413\" (UID: \"83ab1f64-b930-4051-934b-664976b0a413\") " Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.305212 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83ab1f64-b930-4051-934b-664976b0a413-config-volume" (OuterVolumeSpecName: "config-volume") pod "83ab1f64-b930-4051-934b-664976b0a413" (UID: "83ab1f64-b930-4051-934b-664976b0a413"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.310532 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83ab1f64-b930-4051-934b-664976b0a413-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83ab1f64-b930-4051-934b-664976b0a413" (UID: "83ab1f64-b930-4051-934b-664976b0a413"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.311204 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ab1f64-b930-4051-934b-664976b0a413-kube-api-access-gt5gm" (OuterVolumeSpecName: "kube-api-access-gt5gm") pod "83ab1f64-b930-4051-934b-664976b0a413" (UID: "83ab1f64-b930-4051-934b-664976b0a413"). InnerVolumeSpecName "kube-api-access-gt5gm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.406666 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt5gm\" (UniqueName: \"kubernetes.io/projected/83ab1f64-b930-4051-934b-664976b0a413-kube-api-access-gt5gm\") on node \"crc\" DevicePath \"\"" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.406702 4856 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83ab1f64-b930-4051-934b-664976b0a413-config-volume\") on node \"crc\" DevicePath \"\"" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.406713 4856 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83ab1f64-b930-4051-934b-664976b0a413-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.775032 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" event={"ID":"83ab1f64-b930-4051-934b-664976b0a413","Type":"ContainerDied","Data":"d41af43bf6dc72dfd1e9262e3d2b5637b062d3f41d822b5cc37c3050c6037f9c"} Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.775074 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d41af43bf6dc72dfd1e9262e3d2b5637b062d3f41d822b5cc37c3050c6037f9c" Nov 22 10:00:03 crc kubenswrapper[4856]: I1122 10:00:03.775083 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29396760-rmqht" Nov 22 10:00:04 crc kubenswrapper[4856]: I1122 10:00:04.243717 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r"] Nov 22 10:00:04 crc kubenswrapper[4856]: I1122 10:00:04.251983 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29396715-smf5r"] Nov 22 10:00:04 crc kubenswrapper[4856]: I1122 10:00:04.728619 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48401830-c3ac-4955-a5d1-125bcf6a70a3" path="/var/lib/kubelet/pods/48401830-c3ac-4955-a5d1-125bcf6a70a3/volumes" Nov 22 10:00:51 crc kubenswrapper[4856]: I1122 10:00:51.233102 4856 scope.go:117] "RemoveContainer" containerID="c843aa8c4cdaedaf3736714f0c456037aabe7edabc24a864ef358d3d2af4b1d6" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.563987 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29396761-54k82"] Nov 22 10:01:00 crc kubenswrapper[4856]: E1122 10:01:00.564946 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ab1f64-b930-4051-934b-664976b0a413" containerName="collect-profiles" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.564960 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ab1f64-b930-4051-934b-664976b0a413" containerName="collect-profiles" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.565186 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ab1f64-b930-4051-934b-664976b0a413" containerName="collect-profiles" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.565948 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.617165 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396761-54k82"] Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.717546 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-combined-ca-bundle\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.717623 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4f47\" (UniqueName: \"kubernetes.io/projected/c785adf0-bfc7-4bcd-83c1-f5f346583e47-kube-api-access-r4f47\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.717701 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-config-data\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.717769 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-fernet-keys\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.819583 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-fernet-keys\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.820034 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-combined-ca-bundle\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.820075 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4f47\" (UniqueName: \"kubernetes.io/projected/c785adf0-bfc7-4bcd-83c1-f5f346583e47-kube-api-access-r4f47\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.820149 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-config-data\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.826862 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-config-data\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.826941 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-fernet-keys\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.829102 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-combined-ca-bundle\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.840279 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4f47\" (UniqueName: \"kubernetes.io/projected/c785adf0-bfc7-4bcd-83c1-f5f346583e47-kube-api-access-r4f47\") pod \"keystone-cron-29396761-54k82\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:00 crc kubenswrapper[4856]: I1122 10:01:00.889679 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:01 crc kubenswrapper[4856]: I1122 10:01:01.348452 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29396761-54k82"] Nov 22 10:01:02 crc kubenswrapper[4856]: I1122 10:01:02.344162 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396761-54k82" event={"ID":"c785adf0-bfc7-4bcd-83c1-f5f346583e47","Type":"ContainerStarted","Data":"aed85089c94906b4c9818170d6bd5c4ff930ef4ff5354e4b6b0f71462d74f375"} Nov 22 10:01:02 crc kubenswrapper[4856]: I1122 10:01:02.344576 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396761-54k82" event={"ID":"c785adf0-bfc7-4bcd-83c1-f5f346583e47","Type":"ContainerStarted","Data":"d62fee1eaa799e9d59a4d955007f8d06f9557993a332ef28978a065d892cc209"} Nov 22 10:01:02 crc kubenswrapper[4856]: I1122 10:01:02.366264 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29396761-54k82" podStartSLOduration=2.366246771 podStartE2EDuration="2.366246771s" podCreationTimestamp="2025-11-22 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:01:02.357750812 +0000 UTC m=+10704.771144090" watchObservedRunningTime="2025-11-22 10:01:02.366246771 +0000 UTC m=+10704.779640029" Nov 22 10:01:05 crc kubenswrapper[4856]: I1122 10:01:05.373240 4856 generic.go:334] "Generic (PLEG): container finished" podID="c785adf0-bfc7-4bcd-83c1-f5f346583e47" containerID="aed85089c94906b4c9818170d6bd5c4ff930ef4ff5354e4b6b0f71462d74f375" exitCode=0 Nov 22 10:01:05 crc kubenswrapper[4856]: I1122 10:01:05.373361 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396761-54k82" event={"ID":"c785adf0-bfc7-4bcd-83c1-f5f346583e47","Type":"ContainerDied","Data":"aed85089c94906b4c9818170d6bd5c4ff930ef4ff5354e4b6b0f71462d74f375"} Nov 22 10:01:06 crc kubenswrapper[4856]: 
I1122 10:01:06.729251 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.848369 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-combined-ca-bundle\") pod \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.848443 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4f47\" (UniqueName: \"kubernetes.io/projected/c785adf0-bfc7-4bcd-83c1-f5f346583e47-kube-api-access-r4f47\") pod \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.848578 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-fernet-keys\") pod \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.848636 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-config-data\") pod \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\" (UID: \"c785adf0-bfc7-4bcd-83c1-f5f346583e47\") " Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.853643 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c785adf0-bfc7-4bcd-83c1-f5f346583e47" (UID: "c785adf0-bfc7-4bcd-83c1-f5f346583e47"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.853729 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c785adf0-bfc7-4bcd-83c1-f5f346583e47-kube-api-access-r4f47" (OuterVolumeSpecName: "kube-api-access-r4f47") pod "c785adf0-bfc7-4bcd-83c1-f5f346583e47" (UID: "c785adf0-bfc7-4bcd-83c1-f5f346583e47"). InnerVolumeSpecName "kube-api-access-r4f47". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.878169 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c785adf0-bfc7-4bcd-83c1-f5f346583e47" (UID: "c785adf0-bfc7-4bcd-83c1-f5f346583e47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.901224 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-config-data" (OuterVolumeSpecName: "config-data") pod "c785adf0-bfc7-4bcd-83c1-f5f346583e47" (UID: "c785adf0-bfc7-4bcd-83c1-f5f346583e47"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.951741 4856 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.951776 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.951787 4856 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c785adf0-bfc7-4bcd-83c1-f5f346583e47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:06 crc kubenswrapper[4856]: I1122 10:01:06.951799 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4f47\" (UniqueName: \"kubernetes.io/projected/c785adf0-bfc7-4bcd-83c1-f5f346583e47-kube-api-access-r4f47\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:07 crc kubenswrapper[4856]: I1122 10:01:07.390424 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29396761-54k82" event={"ID":"c785adf0-bfc7-4bcd-83c1-f5f346583e47","Type":"ContainerDied","Data":"d62fee1eaa799e9d59a4d955007f8d06f9557993a332ef28978a065d892cc209"} Nov 22 10:01:07 crc kubenswrapper[4856]: I1122 10:01:07.390470 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d62fee1eaa799e9d59a4d955007f8d06f9557993a332ef28978a065d892cc209" Nov 22 10:01:07 crc kubenswrapper[4856]: I1122 10:01:07.390497 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29396761-54k82" Nov 22 10:01:21 crc kubenswrapper[4856]: I1122 10:01:21.520179 4856 generic.go:334] "Generic (PLEG): container finished" podID="9ceb57cb-8794-40bb-97b2-d59671b89459" containerID="091bcffc4f70006fec8c980ae3b20ba63a7c90420f7373fb904f04352d1923a7" exitCode=0 Nov 22 10:01:21 crc kubenswrapper[4856]: I1122 10:01:21.520244 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ceb57cb-8794-40bb-97b2-d59671b89459","Type":"ContainerDied","Data":"091bcffc4f70006fec8c980ae3b20ba63a7c90420f7373fb904f04352d1923a7"} Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.882055 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.964649 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config-secret\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.964826 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-temporary\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.964892 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfwz9\" (UniqueName: \"kubernetes.io/projected/9ceb57cb-8794-40bb-97b2-d59671b89459-kube-api-access-bfwz9\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.964930 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ca-certs\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.964957 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ssh-key\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.965026 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-workdir\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.965055 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.965112 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.965158 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-config-data\") pod \"9ceb57cb-8794-40bb-97b2-d59671b89459\" (UID: \"9ceb57cb-8794-40bb-97b2-d59671b89459\") " Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.966348 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-config-data" (OuterVolumeSpecName: "config-data") pod 
"9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.966991 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.978720 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 22 10:01:22 crc kubenswrapper[4856]: I1122 10:01:22.979061 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ceb57cb-8794-40bb-97b2-d59671b89459-kube-api-access-bfwz9" (OuterVolumeSpecName: "kube-api-access-bfwz9") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "kube-api-access-bfwz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.003040 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.012152 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.015173 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.015219 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.030811 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9ceb57cb-8794-40bb-97b2-d59671b89459" (UID: "9ceb57cb-8794-40bb-97b2-d59671b89459"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067742 4856 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067775 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfwz9\" (UniqueName: \"kubernetes.io/projected/9ceb57cb-8794-40bb-97b2-d59671b89459-kube-api-access-bfwz9\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067785 4856 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067792 4856 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067803 4856 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9ceb57cb-8794-40bb-97b2-d59671b89459-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067813 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067850 4856 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067860 4856 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ceb57cb-8794-40bb-97b2-d59671b89459-config-data\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.067869 4856 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9ceb57cb-8794-40bb-97b2-d59671b89459-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.116815 4856 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.170165 4856 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.537598 4856 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/tempest-tests-tempest" event={"ID":"9ceb57cb-8794-40bb-97b2-d59671b89459","Type":"ContainerDied","Data":"db5a297fdee10da587b0c07acdb07d0b4029dc9b1ee84f269418ca4bd8761c33"} Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.537658 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5a297fdee10da587b0c07acdb07d0b4029dc9b1ee84f269418ca4bd8761c33" Nov 22 10:01:23 crc kubenswrapper[4856]: I1122 10:01:23.537705 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.467058 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 22 10:01:32 crc kubenswrapper[4856]: E1122 10:01:32.468394 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ceb57cb-8794-40bb-97b2-d59671b89459" containerName="tempest-tests-tempest-tests-runner" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.468414 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ceb57cb-8794-40bb-97b2-d59671b89459" containerName="tempest-tests-tempest-tests-runner" Nov 22 10:01:32 crc kubenswrapper[4856]: E1122 10:01:32.468449 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c785adf0-bfc7-4bcd-83c1-f5f346583e47" containerName="keystone-cron" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.468457 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="c785adf0-bfc7-4bcd-83c1-f5f346583e47" containerName="keystone-cron" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.468708 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="c785adf0-bfc7-4bcd-83c1-f5f346583e47" containerName="keystone-cron" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.468731 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ceb57cb-8794-40bb-97b2-d59671b89459" containerName="tempest-tests-tempest-tests-runner" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.469643 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.473589 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nvzxv" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.478042 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.583597 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b6q2\" (UniqueName: \"kubernetes.io/projected/f649b139-1eb1-4f25-b521-e33f00b0731f-kube-api-access-6b6q2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f649b139-1eb1-4f25-b521-e33f00b0731f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.583913 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f649b139-1eb1-4f25-b521-e33f00b0731f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.686344 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f649b139-1eb1-4f25-b521-e33f00b0731f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.686444 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b6q2\" (UniqueName: \"kubernetes.io/projected/f649b139-1eb1-4f25-b521-e33f00b0731f-kube-api-access-6b6q2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f649b139-1eb1-4f25-b521-e33f00b0731f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.686928 4856 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f649b139-1eb1-4f25-b521-e33f00b0731f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.717798 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b6q2\" (UniqueName: \"kubernetes.io/projected/f649b139-1eb1-4f25-b521-e33f00b0731f-kube-api-access-6b6q2\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f649b139-1eb1-4f25-b521-e33f00b0731f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc kubenswrapper[4856]: I1122 10:01:32.764027 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"f649b139-1eb1-4f25-b521-e33f00b0731f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:32 crc 
kubenswrapper[4856]: I1122 10:01:32.847080 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 22 10:01:33 crc kubenswrapper[4856]: I1122 10:01:33.298556 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 22 10:01:33 crc kubenswrapper[4856]: I1122 10:01:33.301552 4856 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 22 10:01:33 crc kubenswrapper[4856]: I1122 10:01:33.642896 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f649b139-1eb1-4f25-b521-e33f00b0731f","Type":"ContainerStarted","Data":"99d12ab3b71650e4fb4afad694e6f0840c6ac1afd22f94eebf11f4d7aa9f11c4"} Nov 22 10:01:34 crc kubenswrapper[4856]: I1122 10:01:34.658215 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"f649b139-1eb1-4f25-b521-e33f00b0731f","Type":"ContainerStarted","Data":"55457c2b0bad455e223e2ee66ca1bbbfb228a4203447639db946a9834f978515"} Nov 22 10:01:34 crc kubenswrapper[4856]: I1122 10:01:34.686803 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.778392415 podStartE2EDuration="2.68677992s" podCreationTimestamp="2025-11-22 10:01:32 +0000 UTC" firstStartedPulling="2025-11-22 10:01:33.301307871 +0000 UTC m=+10735.714701129" lastFinishedPulling="2025-11-22 10:01:34.209695376 +0000 UTC m=+10736.623088634" observedRunningTime="2025-11-22 10:01:34.678871947 +0000 UTC m=+10737.092265205" watchObservedRunningTime="2025-11-22 10:01:34.68677992 +0000 UTC m=+10737.100173188" Nov 22 10:01:59 crc kubenswrapper[4856]: I1122 10:01:59.754781 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:01:59 crc kubenswrapper[4856]: I1122 10:01:59.755225 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:02:29 crc kubenswrapper[4856]: I1122 10:02:29.754085 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:02:29 crc kubenswrapper[4856]: I1122 10:02:29.754646 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.402386 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5sxxb/must-gather-hj5wk"] Nov 22 10:02:32 crc 
kubenswrapper[4856]: I1122 10:02:32.404708 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.412237 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5sxxb"/"openshift-service-ca.crt" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.412268 4856 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-5sxxb"/"default-dockercfg-cdfj7" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.412654 4856 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-5sxxb"/"kube-root-ca.crt" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.415780 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5sxxb/must-gather-hj5wk"] Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.516292 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvsf\" (UniqueName: \"kubernetes.io/projected/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-kube-api-access-xvvsf\") pod \"must-gather-hj5wk\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.516360 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-must-gather-output\") pod \"must-gather-hj5wk\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.617876 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvvsf\" (UniqueName: \"kubernetes.io/projected/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-kube-api-access-xvvsf\") pod \"must-gather-hj5wk\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.617949 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-must-gather-output\") pod \"must-gather-hj5wk\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.618343 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-must-gather-output\") pod \"must-gather-hj5wk\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.659312 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvvsf\" (UniqueName: \"kubernetes.io/projected/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-kube-api-access-xvvsf\") pod \"must-gather-hj5wk\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:32 crc kubenswrapper[4856]: I1122 10:02:32.724713 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:02:33 crc kubenswrapper[4856]: I1122 10:02:33.376632 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5sxxb/must-gather-hj5wk"] Nov 22 10:02:33 crc kubenswrapper[4856]: W1122 10:02:33.387082 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a1ca3cb_fb3f_420c_8caf_1787dd762c29.slice/crio-f1f606fac7942d6d29c8058fb6dec4dac2b5c46d8038c69c33f2ba5b77387527 WatchSource:0}: Error finding container f1f606fac7942d6d29c8058fb6dec4dac2b5c46d8038c69c33f2ba5b77387527: Status 404 returned error can't find the container with id f1f606fac7942d6d29c8058fb6dec4dac2b5c46d8038c69c33f2ba5b77387527 Nov 22 10:02:34 crc kubenswrapper[4856]: I1122 10:02:34.205399 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" event={"ID":"1a1ca3cb-fb3f-420c-8caf-1787dd762c29","Type":"ContainerStarted","Data":"f1f606fac7942d6d29c8058fb6dec4dac2b5c46d8038c69c33f2ba5b77387527"} Nov 22 10:02:40 crc kubenswrapper[4856]: I1122 10:02:40.300998 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" event={"ID":"1a1ca3cb-fb3f-420c-8caf-1787dd762c29","Type":"ContainerStarted","Data":"f953fe7f427535f2f5e24645c11e09aed13aebea34b8e12a7325003582fb3d5b"} Nov 22 10:02:40 crc kubenswrapper[4856]: I1122 10:02:40.301587 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" event={"ID":"1a1ca3cb-fb3f-420c-8caf-1787dd762c29","Type":"ContainerStarted","Data":"d0a9054d4703c7c961fdc2ced123462e2d3e8643152e6eb94c48b76222515a44"} Nov 22 10:02:40 crc kubenswrapper[4856]: I1122 10:02:40.329613 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" podStartSLOduration=2.025956958 podStartE2EDuration="8.329486995s" podCreationTimestamp="2025-11-22 10:02:32 +0000 UTC" firstStartedPulling="2025-11-22 10:02:33.389812557 +0000 UTC m=+10795.803205815" lastFinishedPulling="2025-11-22 10:02:39.693342594 +0000 UTC m=+10802.106735852" observedRunningTime="2025-11-22 10:02:40.318065036 +0000 UTC m=+10802.731458304" watchObservedRunningTime="2025-11-22 10:02:40.329486995 +0000 UTC m=+10802.742880293" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.509305 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-888gn"] Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.511172 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.694273 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/634bf465-8a8e-44ac-b807-bcc2e1052f29-host\") pod \"crc-debug-888gn\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.695428 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5ll\" (UniqueName: \"kubernetes.io/projected/634bf465-8a8e-44ac-b807-bcc2e1052f29-kube-api-access-jz5ll\") pod \"crc-debug-888gn\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.798420 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5ll\" (UniqueName: \"kubernetes.io/projected/634bf465-8a8e-44ac-b807-bcc2e1052f29-kube-api-access-jz5ll\") pod \"crc-debug-888gn\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.798980 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/634bf465-8a8e-44ac-b807-bcc2e1052f29-host\") pod \"crc-debug-888gn\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.799167 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/634bf465-8a8e-44ac-b807-bcc2e1052f29-host\") pod \"crc-debug-888gn\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.817017 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5ll\" (UniqueName: \"kubernetes.io/projected/634bf465-8a8e-44ac-b807-bcc2e1052f29-kube-api-access-jz5ll\") pod \"crc-debug-888gn\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:44 crc kubenswrapper[4856]: I1122 10:02:44.832047 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:02:45 crc kubenswrapper[4856]: I1122 10:02:45.351477 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-888gn" event={"ID":"634bf465-8a8e-44ac-b807-bcc2e1052f29","Type":"ContainerStarted","Data":"dc8c85ba98834d088073114da962d29f64c0af5fa1a392d959cda64d970063f9"} Nov 22 10:02:58 crc kubenswrapper[4856]: I1122 10:02:58.481695 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-888gn" event={"ID":"634bf465-8a8e-44ac-b807-bcc2e1052f29","Type":"ContainerStarted","Data":"185bdc038c80d3bd9cdc02b30664667545ae52e4a32d557f4659f69683853145"} Nov 22 10:02:58 crc kubenswrapper[4856]: I1122 10:02:58.506712 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5sxxb/crc-debug-888gn" podStartSLOduration=1.4043504470000001 podStartE2EDuration="14.506688276s" podCreationTimestamp="2025-11-22 10:02:44 +0000 UTC" firstStartedPulling="2025-11-22 10:02:44.883391761 +0000 UTC m=+10807.296785019" lastFinishedPulling="2025-11-22 10:02:57.98572958 +0000 UTC m=+10820.399122848" observedRunningTime="2025-11-22 10:02:58.493272184 +0000 UTC m=+10820.906665442" watchObservedRunningTime="2025-11-22 10:02:58.506688276 +0000 UTC m=+10820.920081534" Nov 22 10:02:59 crc kubenswrapper[4856]: I1122 10:02:59.754298 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:02:59 crc kubenswrapper[4856]: I1122 10:02:59.754619 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:02:59 crc kubenswrapper[4856]: I1122 10:02:59.754666 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 10:02:59 crc kubenswrapper[4856]: I1122 10:02:59.755434 4856 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b8d0d88e8da694286b9829436d41459bda182635bbecc7206139ad174a04590"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:02:59 crc kubenswrapper[4856]: I1122 10:02:59.755485 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://7b8d0d88e8da694286b9829436d41459bda182635bbecc7206139ad174a04590" gracePeriod=600 Nov 22 10:03:00 crc kubenswrapper[4856]: I1122 10:03:00.502498 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="7b8d0d88e8da694286b9829436d41459bda182635bbecc7206139ad174a04590" exitCode=0 Nov 22 10:03:00 crc kubenswrapper[4856]: I1122 10:03:00.502547 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"7b8d0d88e8da694286b9829436d41459bda182635bbecc7206139ad174a04590"} Nov 22 10:03:00 crc kubenswrapper[4856]: I1122 10:03:00.503008 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77"} Nov 22 10:03:00 crc kubenswrapper[4856]: I1122 10:03:00.503030 4856 scope.go:117] "RemoveContainer" containerID="ae1832405cd7f9b639e977fce4b64e569c8c331c435c6239675b6e7a3e763a85" Nov 22 10:03:53 crc kubenswrapper[4856]: I1122 10:03:53.060627 4856 generic.go:334] "Generic (PLEG): container finished" podID="634bf465-8a8e-44ac-b807-bcc2e1052f29" containerID="185bdc038c80d3bd9cdc02b30664667545ae52e4a32d557f4659f69683853145" exitCode=0 Nov 22 10:03:53 crc kubenswrapper[4856]: I1122 10:03:53.060715 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-888gn" event={"ID":"634bf465-8a8e-44ac-b807-bcc2e1052f29","Type":"ContainerDied","Data":"185bdc038c80d3bd9cdc02b30664667545ae52e4a32d557f4659f69683853145"} Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.179404 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.226606 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-888gn"] Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.235791 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-888gn"] Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.341569 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5ll\" (UniqueName: \"kubernetes.io/projected/634bf465-8a8e-44ac-b807-bcc2e1052f29-kube-api-access-jz5ll\") pod \"634bf465-8a8e-44ac-b807-bcc2e1052f29\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.341722 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/634bf465-8a8e-44ac-b807-bcc2e1052f29-host\") pod \"634bf465-8a8e-44ac-b807-bcc2e1052f29\" (UID: \"634bf465-8a8e-44ac-b807-bcc2e1052f29\") " Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.341852 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/634bf465-8a8e-44ac-b807-bcc2e1052f29-host" (OuterVolumeSpecName: "host") pod "634bf465-8a8e-44ac-b807-bcc2e1052f29" (UID: "634bf465-8a8e-44ac-b807-bcc2e1052f29"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.342252 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/634bf465-8a8e-44ac-b807-bcc2e1052f29-host\") on node \"crc\" DevicePath \"\"" Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.347750 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634bf465-8a8e-44ac-b807-bcc2e1052f29-kube-api-access-jz5ll" (OuterVolumeSpecName: "kube-api-access-jz5ll") pod "634bf465-8a8e-44ac-b807-bcc2e1052f29" (UID: "634bf465-8a8e-44ac-b807-bcc2e1052f29"). InnerVolumeSpecName "kube-api-access-jz5ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.444187 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5ll\" (UniqueName: \"kubernetes.io/projected/634bf465-8a8e-44ac-b807-bcc2e1052f29-kube-api-access-jz5ll\") on node \"crc\" DevicePath \"\"" Nov 22 10:03:54 crc kubenswrapper[4856]: I1122 10:03:54.721778 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634bf465-8a8e-44ac-b807-bcc2e1052f29" path="/var/lib/kubelet/pods/634bf465-8a8e-44ac-b807-bcc2e1052f29/volumes" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.078538 4856 scope.go:117] "RemoveContainer" containerID="185bdc038c80d3bd9cdc02b30664667545ae52e4a32d557f4659f69683853145" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.078593 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-888gn" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.438618 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-v6zdm"] Nov 22 10:03:55 crc kubenswrapper[4856]: E1122 10:03:55.439028 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634bf465-8a8e-44ac-b807-bcc2e1052f29" containerName="container-00" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.439039 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="634bf465-8a8e-44ac-b807-bcc2e1052f29" containerName="container-00" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.439264 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="634bf465-8a8e-44ac-b807-bcc2e1052f29" containerName="container-00" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.440063 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.566094 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02a68f45-b4b6-425e-a05e-b77055737757-host\") pod \"crc-debug-v6zdm\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.566442 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8d9l\" (UniqueName: \"kubernetes.io/projected/02a68f45-b4b6-425e-a05e-b77055737757-kube-api-access-d8d9l\") pod \"crc-debug-v6zdm\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.668820 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02a68f45-b4b6-425e-a05e-b77055737757-host\") pod \"crc-debug-v6zdm\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.668954 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8d9l\" (UniqueName: \"kubernetes.io/projected/02a68f45-b4b6-425e-a05e-b77055737757-kube-api-access-d8d9l\") pod \"crc-debug-v6zdm\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.668957 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02a68f45-b4b6-425e-a05e-b77055737757-host\") pod \"crc-debug-v6zdm\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.701168 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8d9l\" (UniqueName: \"kubernetes.io/projected/02a68f45-b4b6-425e-a05e-b77055737757-kube-api-access-d8d9l\") pod \"crc-debug-v6zdm\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:55 crc kubenswrapper[4856]: I1122 10:03:55.759475 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:56 crc kubenswrapper[4856]: I1122 10:03:56.095600 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" event={"ID":"02a68f45-b4b6-425e-a05e-b77055737757","Type":"ContainerStarted","Data":"eb95c2301512441d74911286cee23b4d49f00ac92914ec24f612ac9d2e356376"} Nov 22 10:03:56 crc kubenswrapper[4856]: I1122 10:03:56.096124 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" event={"ID":"02a68f45-b4b6-425e-a05e-b77055737757","Type":"ContainerStarted","Data":"ed3ee2179b007f2457d1548f6e145d50c235b808d58d99ad2e21784d8da99501"} Nov 22 10:03:56 crc kubenswrapper[4856]: I1122 10:03:56.112921 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" podStartSLOduration=1.112904312 podStartE2EDuration="1.112904312s" podCreationTimestamp="2025-11-22 10:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 10:03:56.107857996 +0000 UTC m=+10878.521251254" watchObservedRunningTime="2025-11-22 10:03:56.112904312 +0000 UTC m=+10878.526297570" Nov 22 10:03:57 crc kubenswrapper[4856]: I1122 10:03:57.105330 4856 generic.go:334] "Generic (PLEG): container finished" podID="02a68f45-b4b6-425e-a05e-b77055737757" containerID="eb95c2301512441d74911286cee23b4d49f00ac92914ec24f612ac9d2e356376" exitCode=0 Nov 22 10:03:57 crc kubenswrapper[4856]: I1122 10:03:57.105363 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" event={"ID":"02a68f45-b4b6-425e-a05e-b77055737757","Type":"ContainerDied","Data":"eb95c2301512441d74911286cee23b4d49f00ac92914ec24f612ac9d2e356376"} Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.228657 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.415421 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8d9l\" (UniqueName: \"kubernetes.io/projected/02a68f45-b4b6-425e-a05e-b77055737757-kube-api-access-d8d9l\") pod \"02a68f45-b4b6-425e-a05e-b77055737757\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.415467 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02a68f45-b4b6-425e-a05e-b77055737757-host\") pod \"02a68f45-b4b6-425e-a05e-b77055737757\" (UID: \"02a68f45-b4b6-425e-a05e-b77055737757\") " Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.415813 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02a68f45-b4b6-425e-a05e-b77055737757-host" (OuterVolumeSpecName: "host") pod "02a68f45-b4b6-425e-a05e-b77055737757" (UID: "02a68f45-b4b6-425e-a05e-b77055737757"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.416366 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/02a68f45-b4b6-425e-a05e-b77055737757-host\") on node \"crc\" DevicePath \"\"" Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.424154 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a68f45-b4b6-425e-a05e-b77055737757-kube-api-access-d8d9l" (OuterVolumeSpecName: "kube-api-access-d8d9l") pod "02a68f45-b4b6-425e-a05e-b77055737757" (UID: "02a68f45-b4b6-425e-a05e-b77055737757"). InnerVolumeSpecName "kube-api-access-d8d9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.517802 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8d9l\" (UniqueName: \"kubernetes.io/projected/02a68f45-b4b6-425e-a05e-b77055737757-kube-api-access-d8d9l\") on node \"crc\" DevicePath \"\"" Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.870347 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-v6zdm"] Nov 22 10:03:58 crc kubenswrapper[4856]: I1122 10:03:58.878697 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-v6zdm"] Nov 22 10:03:59 crc kubenswrapper[4856]: I1122 10:03:59.127231 4856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed3ee2179b007f2457d1548f6e145d50c235b808d58d99ad2e21784d8da99501" Nov 22 10:03:59 crc kubenswrapper[4856]: I1122 10:03:59.127281 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-v6zdm" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.034284 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-pf9jc"] Nov 22 10:04:00 crc kubenswrapper[4856]: E1122 10:04:00.034744 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a68f45-b4b6-425e-a05e-b77055737757" containerName="container-00" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.034760 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a68f45-b4b6-425e-a05e-b77055737757" containerName="container-00" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.034971 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a68f45-b4b6-425e-a05e-b77055737757" containerName="container-00" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.035686 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.147215 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9rbm\" (UniqueName: \"kubernetes.io/projected/4d9b4634-f1d6-4d79-b403-66ffe67bf188-kube-api-access-d9rbm\") pod \"crc-debug-pf9jc\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.147782 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d9b4634-f1d6-4d79-b403-66ffe67bf188-host\") pod \"crc-debug-pf9jc\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.249952 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d9b4634-f1d6-4d79-b403-66ffe67bf188-host\") pod \"crc-debug-pf9jc\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.250065 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9rbm\" (UniqueName: \"kubernetes.io/projected/4d9b4634-f1d6-4d79-b403-66ffe67bf188-kube-api-access-d9rbm\") pod \"crc-debug-pf9jc\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.250177 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d9b4634-f1d6-4d79-b403-66ffe67bf188-host\") pod \"crc-debug-pf9jc\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.268145 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9rbm\" (UniqueName: \"kubernetes.io/projected/4d9b4634-f1d6-4d79-b403-66ffe67bf188-kube-api-access-d9rbm\") pod \"crc-debug-pf9jc\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: I1122 10:04:00.352590 4856 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:00 crc kubenswrapper[4856]: W1122 10:04:00.413042 4856 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d9b4634_f1d6_4d79_b403_66ffe67bf188.slice/crio-6b7dd56865e45041edf4ff08a51ee2c8d784a628215194ac0289ae64989ee920 WatchSource:0}: Error finding container 6b7dd56865e45041edf4ff08a51ee2c8d784a628215194ac0289ae64989ee920: Status 404 returned error can't find the container with id 6b7dd56865e45041edf4ff08a51ee2c8d784a628215194ac0289ae64989ee920 Nov 22 10:04:01 crc kubenswrapper[4856]: I1122 10:04:01.037771 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a68f45-b4b6-425e-a05e-b77055737757" path="/var/lib/kubelet/pods/02a68f45-b4b6-425e-a05e-b77055737757/volumes" Nov 22 10:04:01 crc kubenswrapper[4856]: I1122 10:04:01.249088 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" event={"ID":"4d9b4634-f1d6-4d79-b403-66ffe67bf188","Type":"ContainerStarted","Data":"6b7dd56865e45041edf4ff08a51ee2c8d784a628215194ac0289ae64989ee920"} Nov 22 10:04:02 crc kubenswrapper[4856]: I1122 10:04:02.260779 4856 generic.go:334] "Generic (PLEG): container finished" podID="4d9b4634-f1d6-4d79-b403-66ffe67bf188" containerID="42acb4cd9955c62618dbe8b3bf7a8f672b48463280dfca105dbb4a0ed07003eb" exitCode=0 Nov 22 10:04:02 crc kubenswrapper[4856]: I1122 10:04:02.260817 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" event={"ID":"4d9b4634-f1d6-4d79-b403-66ffe67bf188","Type":"ContainerDied","Data":"42acb4cd9955c62618dbe8b3bf7a8f672b48463280dfca105dbb4a0ed07003eb"} Nov 22 10:04:02 crc kubenswrapper[4856]: I1122 10:04:02.303274 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-pf9jc"] Nov 22 10:04:02 crc kubenswrapper[4856]: I1122 10:04:02.314761 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5sxxb/crc-debug-pf9jc"] Nov 22 10:04:03 crc kubenswrapper[4856]: I1122 10:04:03.374176 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:03 crc kubenswrapper[4856]: I1122 10:04:03.383641 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d9b4634-f1d6-4d79-b403-66ffe67bf188-host\") pod \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " Nov 22 10:04:03 crc kubenswrapper[4856]: I1122 10:04:03.383791 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d9b4634-f1d6-4d79-b403-66ffe67bf188-host" (OuterVolumeSpecName: "host") pod "4d9b4634-f1d6-4d79-b403-66ffe67bf188" (UID: "4d9b4634-f1d6-4d79-b403-66ffe67bf188"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 22 10:04:03 crc kubenswrapper[4856]: I1122 10:04:03.383893 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9rbm\" (UniqueName: \"kubernetes.io/projected/4d9b4634-f1d6-4d79-b403-66ffe67bf188-kube-api-access-d9rbm\") pod \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\" (UID: \"4d9b4634-f1d6-4d79-b403-66ffe67bf188\") " Nov 22 10:04:03 crc kubenswrapper[4856]: I1122 10:04:03.384339 4856 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4d9b4634-f1d6-4d79-b403-66ffe67bf188-host\") on node \"crc\" DevicePath \"\"" Nov 22 10:04:03 crc kubenswrapper[4856]: I1122 10:04:03.388377 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9b4634-f1d6-4d79-b403-66ffe67bf188-kube-api-access-d9rbm" (OuterVolumeSpecName: "kube-api-access-d9rbm") pod "4d9b4634-f1d6-4d79-b403-66ffe67bf188" (UID: "4d9b4634-f1d6-4d79-b403-66ffe67bf188"). InnerVolumeSpecName "kube-api-access-d9rbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:04:03 crc kubenswrapper[4856]: I1122 10:04:03.485430 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9rbm\" (UniqueName: \"kubernetes.io/projected/4d9b4634-f1d6-4d79-b403-66ffe67bf188-kube-api-access-d9rbm\") on node \"crc\" DevicePath \"\"" Nov 22 10:04:04 crc kubenswrapper[4856]: I1122 10:04:04.281741 4856 scope.go:117] "RemoveContainer" containerID="42acb4cd9955c62618dbe8b3bf7a8f672b48463280dfca105dbb4a0ed07003eb" Nov 22 10:04:04 crc kubenswrapper[4856]: I1122 10:04:04.281795 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/crc-debug-pf9jc" Nov 22 10:04:04 crc kubenswrapper[4856]: I1122 10:04:04.720277 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9b4634-f1d6-4d79-b403-66ffe67bf188" path="/var/lib/kubelet/pods/4d9b4634-f1d6-4d79-b403-66ffe67bf188/volumes" Nov 22 10:05:29 crc kubenswrapper[4856]: I1122 10:05:29.754119 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:05:29 crc kubenswrapper[4856]: I1122 10:05:29.754743 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.435935 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cgtk4"] Nov 22 10:05:35 crc kubenswrapper[4856]: E1122 10:05:35.436827 4856 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9b4634-f1d6-4d79-b403-66ffe67bf188" containerName="container-00" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.436841 4856 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9b4634-f1d6-4d79-b403-66ffe67bf188" containerName="container-00" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.437060 4856 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9b4634-f1d6-4d79-b403-66ffe67bf188" containerName="container-00" Nov 22 
10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.438447 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.448120 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgtk4"] Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.600947 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcscj\" (UniqueName: \"kubernetes.io/projected/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-kube-api-access-lcscj\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.601037 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-utilities\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.601092 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-catalog-content\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.702814 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-catalog-content\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.703172 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcscj\" (UniqueName: \"kubernetes.io/projected/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-kube-api-access-lcscj\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.703235 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-utilities\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.703693 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-utilities\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.703905 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-catalog-content\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 
10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.734833 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcscj\" (UniqueName: \"kubernetes.io/projected/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-kube-api-access-lcscj\") pod \"redhat-marketplace-cgtk4\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:35 crc kubenswrapper[4856]: I1122 10:05:35.761387 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:36 crc kubenswrapper[4856]: I1122 10:05:36.253487 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgtk4"] Nov 22 10:05:37 crc kubenswrapper[4856]: I1122 10:05:37.203566 4856 generic.go:334] "Generic (PLEG): container finished" podID="3c496f17-ba0d-48f8-ab9f-16f4bca355b0" containerID="a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd" exitCode=0 Nov 22 10:05:37 crc kubenswrapper[4856]: I1122 10:05:37.203648 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgtk4" event={"ID":"3c496f17-ba0d-48f8-ab9f-16f4bca355b0","Type":"ContainerDied","Data":"a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd"} Nov 22 10:05:37 crc kubenswrapper[4856]: I1122 10:05:37.204140 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgtk4" event={"ID":"3c496f17-ba0d-48f8-ab9f-16f4bca355b0","Type":"ContainerStarted","Data":"68700d77bb1ea520f4c1ed41d86ff603b538b8f8afa241871fa39fa72421b12e"} Nov 22 10:05:40 crc kubenswrapper[4856]: I1122 10:05:40.239243 4856 generic.go:334] "Generic (PLEG): container finished" podID="3c496f17-ba0d-48f8-ab9f-16f4bca355b0" containerID="869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597" exitCode=0 Nov 22 10:05:40 crc kubenswrapper[4856]: I1122 10:05:40.240103 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgtk4" event={"ID":"3c496f17-ba0d-48f8-ab9f-16f4bca355b0","Type":"ContainerDied","Data":"869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597"} Nov 22 10:05:41 crc kubenswrapper[4856]: I1122 10:05:41.250288 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgtk4" event={"ID":"3c496f17-ba0d-48f8-ab9f-16f4bca355b0","Type":"ContainerStarted","Data":"9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd"} Nov 22 10:05:41 crc kubenswrapper[4856]: I1122 10:05:41.273295 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cgtk4" podStartSLOduration=2.790701149 podStartE2EDuration="6.273277391s" podCreationTimestamp="2025-11-22 10:05:35 +0000 UTC" firstStartedPulling="2025-11-22 10:05:37.205322077 +0000 UTC m=+10979.618715335" lastFinishedPulling="2025-11-22 10:05:40.687898319 +0000 UTC m=+10983.101291577" observedRunningTime="2025-11-22 10:05:41.267386482 +0000 UTC m=+10983.680779740" watchObservedRunningTime="2025-11-22 10:05:41.273277391 +0000 UTC m=+10983.686670649" Nov 22 10:05:45 crc kubenswrapper[4856]: I1122 10:05:45.763380 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:45 crc kubenswrapper[4856]: I1122 10:05:45.764836 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:45 crc kubenswrapper[4856]: I1122 10:05:45.814927 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:46 crc kubenswrapper[4856]: I1122 10:05:46.364611 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:46 crc kubenswrapper[4856]: I1122 10:05:46.416691 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgtk4"] Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.345414 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cgtk4" podUID="3c496f17-ba0d-48f8-ab9f-16f4bca355b0" containerName="registry-server" containerID="cri-o://9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd" gracePeriod=2 Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.463187 4856 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jdls2"] Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.478678 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdls2"] Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.478804 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.562524 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt4bc\" (UniqueName: \"kubernetes.io/projected/91238ace-3e23-4415-a6d1-64dd06bfd6e0-kube-api-access-nt4bc\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.562620 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-utilities\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.562692 4856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-catalog-content\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.667712 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt4bc\" (UniqueName: \"kubernetes.io/projected/91238ace-3e23-4415-a6d1-64dd06bfd6e0-kube-api-access-nt4bc\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.667821 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-utilities\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" 
Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.667884 4856 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-catalog-content\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.668421 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-utilities\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.668486 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-catalog-content\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.688203 4856 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt4bc\" (UniqueName: \"kubernetes.io/projected/91238ace-3e23-4415-a6d1-64dd06bfd6e0-kube-api-access-nt4bc\") pod \"community-operators-jdls2\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.842868 4856 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:48 crc kubenswrapper[4856]: I1122 10:05:48.940218 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.076312 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcscj\" (UniqueName: \"kubernetes.io/projected/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-kube-api-access-lcscj\") pod \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.076500 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-catalog-content\") pod \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.076562 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-utilities\") pod \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\" (UID: \"3c496f17-ba0d-48f8-ab9f-16f4bca355b0\") " Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.077881 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-utilities" (OuterVolumeSpecName: "utilities") pod "3c496f17-ba0d-48f8-ab9f-16f4bca355b0" (UID: "3c496f17-ba0d-48f8-ab9f-16f4bca355b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.083568 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-kube-api-access-lcscj" (OuterVolumeSpecName: "kube-api-access-lcscj") pod "3c496f17-ba0d-48f8-ab9f-16f4bca355b0" (UID: "3c496f17-ba0d-48f8-ab9f-16f4bca355b0"). InnerVolumeSpecName "kube-api-access-lcscj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.110951 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c496f17-ba0d-48f8-ab9f-16f4bca355b0" (UID: "3c496f17-ba0d-48f8-ab9f-16f4bca355b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.179297 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.179357 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.179395 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcscj\" (UniqueName: \"kubernetes.io/projected/3c496f17-ba0d-48f8-ab9f-16f4bca355b0-kube-api-access-lcscj\") on node \"crc\" DevicePath \"\"" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.356269 4856 generic.go:334] "Generic (PLEG): container finished" podID="3c496f17-ba0d-48f8-ab9f-16f4bca355b0" containerID="9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd" exitCode=0 Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.356320 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgtk4" event={"ID":"3c496f17-ba0d-48f8-ab9f-16f4bca355b0","Type":"ContainerDied","Data":"9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd"} Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.356351 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cgtk4" event={"ID":"3c496f17-ba0d-48f8-ab9f-16f4bca355b0","Type":"ContainerDied","Data":"68700d77bb1ea520f4c1ed41d86ff603b538b8f8afa241871fa39fa72421b12e"} Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.356372 4856 scope.go:117] "RemoveContainer" containerID="9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.356551 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cgtk4" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.391865 4856 scope.go:117] "RemoveContainer" containerID="869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.404655 4856 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jdls2"] Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.412906 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgtk4"] Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.422181 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cgtk4"] Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.429085 4856 scope.go:117] "RemoveContainer" containerID="a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.445714 4856 scope.go:117] "RemoveContainer" containerID="9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd" Nov 22 10:05:49 crc kubenswrapper[4856]: E1122 10:05:49.446496 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd\": container with ID starting with 9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd not found: ID does not exist" containerID="9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.446563 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd"} err="failed to get container status \"9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd\": rpc error: code = NotFound desc = could not find container \"9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd\": container with ID starting with 9871a107cfff26522721e504f4432942aa07da4d61904d6126509c2b53ada4fd not found: ID does not exist" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.446588 4856 scope.go:117] "RemoveContainer" containerID="869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597" Nov 22 10:05:49 crc kubenswrapper[4856]: E1122 10:05:49.446910 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597\": container with ID starting with 869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597 not found: ID does not exist" containerID="869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.446943 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597"} err="failed to get container status \"869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597\": rpc error: code = NotFound desc = could not find container \"869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597\": container with ID starting with 869dd136da93dd687333506b8e329262cbfea7a74f077dbb8ceacfa6d650c597 not found: ID does not exist" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.446966 4856 scope.go:117] "RemoveContainer" 
containerID="a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd" Nov 22 10:05:49 crc kubenswrapper[4856]: E1122 10:05:49.447283 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd\": container with ID starting with a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd not found: ID does not exist" containerID="a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd" Nov 22 10:05:49 crc kubenswrapper[4856]: I1122 10:05:49.447322 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd"} err="failed to get container status \"a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd\": rpc error: code = NotFound desc = could not find container \"a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd\": container with ID starting with a546e5bb217a8e93e46f7e19ad40d4a34c62059260563114a72bc6cdf4c88bdd not found: ID does not exist" Nov 22 10:05:50 crc kubenswrapper[4856]: I1122 10:05:50.373756 4856 generic.go:334] "Generic (PLEG): container finished" podID="91238ace-3e23-4415-a6d1-64dd06bfd6e0" containerID="56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874" exitCode=0 Nov 22 10:05:50 crc kubenswrapper[4856]: I1122 10:05:50.373795 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdls2" event={"ID":"91238ace-3e23-4415-a6d1-64dd06bfd6e0","Type":"ContainerDied","Data":"56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874"} Nov 22 10:05:50 crc kubenswrapper[4856]: I1122 10:05:50.373816 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdls2" event={"ID":"91238ace-3e23-4415-a6d1-64dd06bfd6e0","Type":"ContainerStarted","Data":"b1c517a04fa37a6bef643ea0da5413fbad26b54c2e3f520831bbc8ef209a6fce"} Nov 22 10:05:50 crc kubenswrapper[4856]: I1122 10:05:50.720806 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c496f17-ba0d-48f8-ab9f-16f4bca355b0" path="/var/lib/kubelet/pods/3c496f17-ba0d-48f8-ab9f-16f4bca355b0/volumes" Nov 22 10:05:51 crc kubenswrapper[4856]: I1122 10:05:51.383801 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdls2" event={"ID":"91238ace-3e23-4415-a6d1-64dd06bfd6e0","Type":"ContainerStarted","Data":"e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad"} Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.291854 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0982dad5-4a0f-43a7-a561-a90a5c6a2070/init-config-reloader/0.log" Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.396785 4856 generic.go:334] "Generic (PLEG): container finished" podID="91238ace-3e23-4415-a6d1-64dd06bfd6e0" containerID="e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad" exitCode=0 Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.396839 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdls2" event={"ID":"91238ace-3e23-4415-a6d1-64dd06bfd6e0","Type":"ContainerDied","Data":"e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad"} Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.494934 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_alertmanager-metric-storage-0_0982dad5-4a0f-43a7-a561-a90a5c6a2070/init-config-reloader/0.log" Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.514822 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0982dad5-4a0f-43a7-a561-a90a5c6a2070/alertmanager/0.log" Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.555838 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_0982dad5-4a0f-43a7-a561-a90a5c6a2070/config-reloader/0.log" Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.772816 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b3bdd433-fc71-456a-8e71-69b05aa2f6c9/aodh-api/0.log" Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.825726 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b3bdd433-fc71-456a-8e71-69b05aa2f6c9/aodh-evaluator/0.log" Nov 22 10:05:52 crc kubenswrapper[4856]: I1122 10:05:52.993975 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b3bdd433-fc71-456a-8e71-69b05aa2f6c9/aodh-notifier/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.012552 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b3bdd433-fc71-456a-8e71-69b05aa2f6c9/aodh-listener/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.085281 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7ff8c9cc54-8k24x_fcb86f9c-fee1-46d6-acac-20f49f472dfa/barbican-api/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.221325 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7ff8c9cc54-8k24x_fcb86f9c-fee1-46d6-acac-20f49f472dfa/barbican-api-log/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.306039 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-689d9fcc78-qzcr4_51e24c6d-a8b8-44a4-8654-8e8623dc844f/barbican-keystone-listener/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.413066 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdls2" event={"ID":"91238ace-3e23-4415-a6d1-64dd06bfd6e0","Type":"ContainerStarted","Data":"b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba"} Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.433329 4856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jdls2" podStartSLOduration=2.877377336 podStartE2EDuration="5.433315262s" podCreationTimestamp="2025-11-22 10:05:48 +0000 UTC" firstStartedPulling="2025-11-22 10:05:50.376353377 +0000 UTC m=+10992.789746635" lastFinishedPulling="2025-11-22 10:05:52.932291303 +0000 UTC m=+10995.345684561" observedRunningTime="2025-11-22 10:05:53.429360225 +0000 UTC m=+10995.842753493" watchObservedRunningTime="2025-11-22 10:05:53.433315262 +0000 UTC m=+10995.846708510" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.572585 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-955d7597c-vxs4h_8cad1422-5ab8-4d58-8f88-730c9e301ae9/barbican-worker/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.635993 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-955d7597c-vxs4h_8cad1422-5ab8-4d58-8f88-730c9e301ae9/barbican-worker-log/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: 
I1122 10:05:53.715803 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-689d9fcc78-qzcr4_51e24c6d-a8b8-44a4-8654-8e8623dc844f/barbican-keystone-listener-log/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.867405 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-openstack-openstack-cell1-5tw45_edda8fe7-9e3d-4753-86c7-539cc18590d5/bootstrap-openstack-openstack-cell1/0.log" Nov 22 10:05:53 crc kubenswrapper[4856]: I1122 10:05:53.994263 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d442d81d-f24e-4a27-bbb5-f25a1792bfca/ceilometer-central-agent/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.178676 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d442d81d-f24e-4a27-bbb5-f25a1792bfca/ceilometer-notification-agent/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.181309 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d442d81d-f24e-4a27-bbb5-f25a1792bfca/proxy-httpd/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.205843 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d442d81d-f24e-4a27-bbb5-f25a1792bfca/sg-core/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.446594 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5ba26c8a-e031-4fa4-85e3-e13e63ef1448/cinder-api/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.507616 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_5ba26c8a-e031-4fa4-85e3-e13e63ef1448/cinder-api-log/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.518693 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_56903f1f-89ce-4eca-bd84-0cd0e3814079/cinder-scheduler/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.723013 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_56903f1f-89ce-4eca-bd84-0cd0e3814079/probe/0.log" Nov 22 10:05:54 crc kubenswrapper[4856]: I1122 10:05:54.760057 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-openstack-openstack-cell1-hxn22_2279f2ab-cdc9-4bbb-9d75-4f259de8f544/configure-network-openstack-openstack-cell1/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.020630 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-openstack-openstack-cell1-qhn5d_1ae2a389-4844-467a-a2a7-2296bdb9275b/configure-os-openstack-openstack-cell1/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.086027 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-779cdcc5bf-chvxh_7fdfcc7b-57a0-42bc-9ee5-df8530a53345/init/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.375524 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-779cdcc5bf-chvxh_7fdfcc7b-57a0-42bc-9ee5-df8530a53345/init/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.384611 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-openstack-openstack-cell1-59zw7_1019a693-31a9-4b08-bc98-878920e83124/download-cache-openstack-openstack-cell1/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.386169 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-779cdcc5bf-chvxh_7fdfcc7b-57a0-42bc-9ee5-df8530a53345/dnsmasq-dns/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.653832 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0dbf1025-16ed-4933-8207-61bb390843a6/glance-log/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.683841 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_0dbf1025-16ed-4933-8207-61bb390843a6/glance-httpd/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.865562 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b5cd3215-7fab-4cdf-acfe-b72f972a3d86/glance-httpd/0.log" Nov 22 10:05:55 crc kubenswrapper[4856]: I1122 10:05:55.890486 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b5cd3215-7fab-4cdf-acfe-b72f972a3d86/glance-log/0.log" Nov 22 10:05:56 crc kubenswrapper[4856]: I1122 10:05:56.291452 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6d55bbbf85-9nqnt_ad9a5183-bb59-4674-8656-2a931e90c81f/heat-engine/0.log" Nov 22 10:05:56 crc kubenswrapper[4856]: I1122 10:05:56.701610 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8484479b76-8csj5_14290ea7-6928-401a-8a9e-3ab8e557570d/horizon/0.log" Nov 22 10:05:56 crc kubenswrapper[4856]: I1122 10:05:56.813585 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6df48cd58f-ngxlf_19d88c37-ea75-4207-bb92-9265863c4da6/heat-api/0.log" Nov 22 10:05:56 crc kubenswrapper[4856]: I1122 10:05:56.857027 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5dfbf757c6-zbhzc_6d443d4c-63dd-49d9-ba0e-815576ade7a6/heat-cfnapi/0.log" Nov 22 10:05:56 crc kubenswrapper[4856]: I1122 10:05:56.888619 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-openstack-openstack-cell1-lqv7q_c16cd078-a2a8-4021-aa0a-60dd1aabbe02/install-certs-openstack-openstack-cell1/0.log" Nov 22 10:05:57 crc kubenswrapper[4856]: I1122 10:05:57.080903 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-openstack-openstack-cell1-zflxc_0b3118f9-cb97-4f71-95d4-65c235c904dc/install-os-openstack-openstack-cell1/0.log" Nov 22 10:05:57 crc kubenswrapper[4856]: I1122 10:05:57.377354 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29396701-rhfc7_9860533e-121c-4025-b616-da777f3db9a3/keystone-cron/0.log" Nov 22 10:05:57 crc kubenswrapper[4856]: I1122 10:05:57.382848 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8484479b76-8csj5_14290ea7-6928-401a-8a9e-3ab8e557570d/horizon-log/0.log" Nov 22 10:05:57 crc kubenswrapper[4856]: I1122 10:05:57.724407 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29396761-54k82_c785adf0-bfc7-4bcd-83c1-f5f346583e47/keystone-cron/0.log" Nov 22 10:05:57 crc kubenswrapper[4856]: I1122 10:05:57.840827 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_61795c46-ac49-454d-9ea8-36e6b921c1c5/kube-state-metrics/0.log" Nov 22 10:05:57 crc kubenswrapper[4856]: I1122 10:05:57.896304 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7c69fc876-sckl8_cf49c47a-9ac6-4ff6-b4a4-0e2ed09006aa/keystone-api/0.log" Nov 22 10:05:58 crc 
kubenswrapper[4856]: I1122 10:05:58.026056 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-openstack-openstack-cell1-bld8w_8a9f3905-ecd4-4d91-9d32-89e0c6bf5c44/libvirt-openstack-openstack-cell1/0.log" Nov 22 10:05:58 crc kubenswrapper[4856]: I1122 10:05:58.475797 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8445b95697-hfkrr_4b1f18e1-3e4e-4337-b2e2-e4363d635895/neutron-httpd/0.log" Nov 22 10:05:58 crc kubenswrapper[4856]: I1122 10:05:58.549453 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dhcp-openstack-openstack-cell1-qz9nr_4021796f-1cba-4573-9efa-4ed786ba2251/neutron-dhcp-openstack-openstack-cell1/0.log" Nov 22 10:05:58 crc kubenswrapper[4856]: I1122 10:05:58.803813 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-openstack-openstack-cell1-k5j8z_c65c99da-b7aa-4e12-9973-9d87da7c85af/neutron-metadata-openstack-openstack-cell1/0.log" Nov 22 10:05:58 crc kubenswrapper[4856]: I1122 10:05:58.843174 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:58 crc kubenswrapper[4856]: I1122 10:05:58.843644 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:58 crc kubenswrapper[4856]: I1122 10:05:58.901152 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-8445b95697-hfkrr_4b1f18e1-3e4e-4337-b2e2-e4363d635895/neutron-api/0.log" Nov 22 10:05:58 crc kubenswrapper[4856]: I1122 10:05:58.901583 4856 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.074994 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-sriov-openstack-openstack-cell1-xgb8k_3cfedb7a-57e2-4533-95c9-4c691087caed/neutron-sriov-openstack-openstack-cell1/0.log" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.428031 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e066a9cb-49d5-4f3f-9e6c-fd3c10084936/nova-cell0-conductor-conductor/0.log" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.510353 4856 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.513845 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1320212e-aa18-4900-8d1f-6935e2d18225/nova-api-log/0.log" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.564226 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdls2"] Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.752251 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_09683c8e-5f3c-4c9f-ab27-59ba9a51387e/nova-cell1-conductor-conductor/0.log" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.753924 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.753986 4856 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.762050 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1320212e-aa18-4900-8d1f-6935e2d18225/nova-api-api/0.log" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.805682 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_19ba91be-fe9b-4d3e-a85c-0f5236cfd60b/memcached/0.log" Nov 22 10:05:59 crc kubenswrapper[4856]: I1122 10:05:59.947467 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6998f0c9-8a9c-4d8c-9549-412b52efd19e/nova-cell1-novncproxy-novncproxy/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.071177 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cellxxnrq_1bdbb850-a5cf-4f8e-ae2e-88655ceda16c/nova-cell1-openstack-nova-compute-ffu-cell1-openstack-cell1/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.120210 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-openstack-openstack-cell1-2znnb_30440752-c1e1-4e68-b1af-ac6ee184d1c6/nova-cell1-openstack-openstack-cell1/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.342211 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_12e51766-4906-4715-8a2e-ba76c14f18cc/nova-metadata-log/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.493193 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_fe29e444-cca9-41c8-920a-70302a80bf99/nova-scheduler-scheduler/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.607947 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d4dcc1d5-4e57-45ff-931e-0be9bc3be546/mysql-bootstrap/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.773398 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d4dcc1d5-4e57-45ff-931e-0be9bc3be546/galera/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.825699 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d4dcc1d5-4e57-45ff-931e-0be9bc3be546/mysql-bootstrap/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.900579 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_274230c4-41e5-433a-8878-a09cd3ea7de8/mysql-bootstrap/0.log" Nov 22 10:06:00 crc kubenswrapper[4856]: I1122 10:06:00.972639 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_12e51766-4906-4715-8a2e-ba76c14f18cc/nova-metadata-metadata/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.030148 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_274230c4-41e5-433a-8878-a09cd3ea7de8/galera/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.086771 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_274230c4-41e5-433a-8878-a09cd3ea7de8/mysql-bootstrap/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.086990 4856 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstackclient_1a757c5a-d91e-485c-bf37-0d90b5e87f89/openstackclient/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.225477 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_466d6ab8-2d26-4845-85a4-d4e652a857e7/openstack-network-exporter/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.271216 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_466d6ab8-2d26-4845-85a4-d4e652a857e7/ovn-northd/1.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.290444 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_466d6ab8-2d26-4845-85a4-d4e652a857e7/ovn-northd/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.454158 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_837136ea-05b1-42f9-8af2-806dba026c53/openstack-network-exporter/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.481855 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jdls2" podUID="91238ace-3e23-4415-a6d1-64dd06bfd6e0" containerName="registry-server" containerID="cri-o://b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba" gracePeriod=2 Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.506086 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-openstack-openstack-cell1-znkhd_f3674061-72ad-4651-b5f4-29795684fe8e/ovn-openstack-openstack-cell1/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.544297 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_837136ea-05b1-42f9-8af2-806dba026c53/ovsdbserver-nb/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: E1122 10:06:01.557291 4856 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91238ace_3e23_4415_a6d1_64dd06bfd6e0.slice/crio-conmon-b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba.scope\": RecentStats: unable to find data in memory cache]" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.714617 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_bdda5e7f-56ae-4427-8099-7e1291cc5296/openstack-network-exporter/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.765111 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_bdda5e7f-56ae-4427-8099-7e1291cc5296/ovsdbserver-nb/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.767277 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_a6728e6f-b4b0-45fc-8745-d9c657c6146f/openstack-network-exporter/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.959375 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_a6728e6f-b4b0-45fc-8745-d9c657c6146f/ovsdbserver-nb/0.log" Nov 22 10:06:01 crc kubenswrapper[4856]: I1122 10:06:01.976093 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_60cf15d0-8906-47ae-8fb0-ca49be28e48d/ovsdbserver-sb/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.023063 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_60cf15d0-8906-47ae-8fb0-ca49be28e48d/openstack-network-exporter/0.log" Nov 22 
10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.074374 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.218709 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_f9ef9a9e-2b5f-4833-ae0c-9b205e862eda/openstack-network-exporter/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.222942 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_f9ef9a9e-2b5f-4833-ae0c-9b205e862eda/ovsdbserver-sb/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.245106 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-catalog-content\") pod \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.245155 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-utilities\") pod \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.245202 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nt4bc\" (UniqueName: \"kubernetes.io/projected/91238ace-3e23-4415-a6d1-64dd06bfd6e0-kube-api-access-nt4bc\") pod \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\" (UID: \"91238ace-3e23-4415-a6d1-64dd06bfd6e0\") " Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.246132 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-utilities" (OuterVolumeSpecName: "utilities") pod "91238ace-3e23-4415-a6d1-64dd06bfd6e0" (UID: "91238ace-3e23-4415-a6d1-64dd06bfd6e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.252281 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91238ace-3e23-4415-a6d1-64dd06bfd6e0-kube-api-access-nt4bc" (OuterVolumeSpecName: "kube-api-access-nt4bc") pod "91238ace-3e23-4415-a6d1-64dd06bfd6e0" (UID: "91238ace-3e23-4415-a6d1-64dd06bfd6e0"). InnerVolumeSpecName "kube-api-access-nt4bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.306848 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91238ace-3e23-4415-a6d1-64dd06bfd6e0" (UID: "91238ace-3e23-4415-a6d1-64dd06bfd6e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.327665 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_789e84c3-d8c1-43e1-8024-de34dc89e648/openstack-network-exporter/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.348070 4856 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.348113 4856 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91238ace-3e23-4415-a6d1-64dd06bfd6e0-utilities\") on node \"crc\" DevicePath \"\"" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.348127 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nt4bc\" (UniqueName: \"kubernetes.io/projected/91238ace-3e23-4415-a6d1-64dd06bfd6e0-kube-api-access-nt4bc\") on node \"crc\" DevicePath \"\"" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.389431 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_789e84c3-d8c1-43e1-8024-de34dc89e648/ovsdbserver-sb/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.492386 4856 generic.go:334] "Generic (PLEG): container finished" podID="91238ace-3e23-4415-a6d1-64dd06bfd6e0" containerID="b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba" exitCode=0 Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.492426 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdls2" event={"ID":"91238ace-3e23-4415-a6d1-64dd06bfd6e0","Type":"ContainerDied","Data":"b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba"} Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.492455 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jdls2" event={"ID":"91238ace-3e23-4415-a6d1-64dd06bfd6e0","Type":"ContainerDied","Data":"b1c517a04fa37a6bef643ea0da5413fbad26b54c2e3f520831bbc8ef209a6fce"} Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.492471 4856 scope.go:117] "RemoveContainer" containerID="b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.492620 4856 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jdls2" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.529918 4856 scope.go:117] "RemoveContainer" containerID="e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.540488 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jdls2"] Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.545581 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-764f7c8c76-8hp76_c6cb4a05-65f9-4ff0-814d-f7530da47c97/placement-api/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.562682 4856 scope.go:117] "RemoveContainer" containerID="56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.568291 4856 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jdls2"] Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.614830 4856 scope.go:117] "RemoveContainer" containerID="b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba" Nov 22 10:06:02 crc kubenswrapper[4856]: E1122 10:06:02.615673 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba\": container with ID starting with b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba not found: ID does not exist" containerID="b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.615704 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba"} err="failed to get container status \"b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba\": rpc error: code = NotFound desc = could not find container \"b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba\": container with ID starting with b79a66b5514340a8c1ce6fb9046b237904973a4d15b37b4d38adae7b7717d8ba not found: ID does not exist" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.615729 4856 scope.go:117] "RemoveContainer" containerID="e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad" Nov 22 10:06:02 crc kubenswrapper[4856]: E1122 10:06:02.619897 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad\": container with ID starting with e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad not found: ID does not exist" containerID="e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.619942 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad"} err="failed to get container status \"e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad\": rpc error: code = NotFound desc = could not find container \"e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad\": container with ID starting with e617a3df52e162159a4d82209c45a99ffea426788942b9b5e0073caabfb642ad not found: ID does not exist" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.619968 4856 
scope.go:117] "RemoveContainer" containerID="56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874" Nov 22 10:06:02 crc kubenswrapper[4856]: E1122 10:06:02.620182 4856 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874\": container with ID starting with 56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874 not found: ID does not exist" containerID="56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.620197 4856 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874"} err="failed to get container status \"56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874\": rpc error: code = NotFound desc = could not find container \"56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874\": container with ID starting with 56060096b3257d387c8c96885451df4984c6d74421733134d6c4162a6ab39874 not found: ID does not exist" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.667160 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_pre-adoption-validation-openstack-pre-adoption-openstack-cswwg9_21bb02ee-d25f-4c9d-95a8-84f642661787/pre-adoption-validation-openstack-pre-adoption-openstack-cell1/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.685647 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-764f7c8c76-8hp76_c6cb4a05-65f9-4ff0-814d-f7530da47c97/placement-log/0.log" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.727642 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91238ace-3e23-4415-a6d1-64dd06bfd6e0" path="/var/lib/kubelet/pods/91238ace-3e23-4415-a6d1-64dd06bfd6e0/volumes" Nov 22 10:06:02 crc kubenswrapper[4856]: I1122 10:06:02.838461 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3690a9de-19a8-491f-bf84-3fff9a9d52b3/init-config-reloader/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.018334 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3690a9de-19a8-491f-bf84-3fff9a9d52b3/init-config-reloader/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.057224 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3690a9de-19a8-491f-bf84-3fff9a9d52b3/thanos-sidecar/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.060396 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3690a9de-19a8-491f-bf84-3fff9a9d52b3/prometheus/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.063907 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3690a9de-19a8-491f-bf84-3fff9a9d52b3/config-reloader/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.176988 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7ea8e244-352d-4f27-86b8-2036996316e2/setup-container/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.347854 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f1cbd452-2d8c-428f-98a7-325984950be2/setup-container/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: 
I1122 10:06:03.359369 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7ea8e244-352d-4f27-86b8-2036996316e2/setup-container/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.376805 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7ea8e244-352d-4f27-86b8-2036996316e2/rabbitmq/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.544450 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f1cbd452-2d8c-428f-98a7-325984950be2/setup-container/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.592441 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_f1cbd452-2d8c-428f-98a7-325984950be2/rabbitmq/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.654308 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-openstack-openstack-cell1-bscsv_ad7d4bc8-7324-4941-9bdb-c870dbcba3ed/reboot-os-openstack-openstack-cell1/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.764605 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-openstack-openstack-cell1-9mthx_3d96cb97-55b2-4bec-a4dc-6065d4143687/run-os-openstack-openstack-cell1/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.846749 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-openstack-kvrgl_dc5da2fb-1405-4be9-adca-169ef62d4f19/ssh-known-hosts-openstack/0.log" Nov 22 10:06:03 crc kubenswrapper[4856]: I1122 10:06:03.995725 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68fcd9d79d-pb2lw_24afd937-020f-43ff-beec-3bccac3dffec/proxy-server/0.log" Nov 22 10:06:04 crc kubenswrapper[4856]: I1122 10:06:04.036456 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-68fcd9d79d-pb2lw_24afd937-020f-43ff-beec-3bccac3dffec/proxy-httpd/0.log" Nov 22 10:06:04 crc kubenswrapper[4856]: I1122 10:06:04.075774 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2sgjz_58ad7851-b35b-4ce2-a8d7-c7a8d0f81d9a/swift-ring-rebalance/0.log" Nov 22 10:06:04 crc kubenswrapper[4856]: I1122 10:06:04.244395 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-openstack-openstack-cell1-rhs7q_0845a70f-bedf-4495-8e38-207547e02a31/telemetry-openstack-openstack-cell1/0.log" Nov 22 10:06:04 crc kubenswrapper[4856]: I1122 10:06:04.316617 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9ceb57cb-8794-40bb-97b2-d59671b89459/tempest-tests-tempest-tests-runner/0.log" Nov 22 10:06:04 crc kubenswrapper[4856]: I1122 10:06:04.415786 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_f649b139-1eb1-4f25-b521-e33f00b0731f/test-operator-logs-container/0.log" Nov 22 10:06:04 crc kubenswrapper[4856]: I1122 10:06:04.565722 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tripleo-cleanup-tripleo-cleanup-openstack-cell1-885gx_bf97b43b-e761-42f4-bd6b-837f60e9598c/tripleo-cleanup-tripleo-cleanup-openstack-cell1/0.log" Nov 22 10:06:04 crc kubenswrapper[4856]: I1122 10:06:04.616724 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-openstack-openstack-cell1-5c5b9_5e2c9028-9241-4e80-b568-edbac775f871/validate-network-openstack-openstack-cell1/0.log" Nov 22 10:06:27 crc kubenswrapper[4856]: I1122 10:06:27.613870 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj_e4a4c291-e079-478c-a3fb-86c0e9eceb07/util/0.log" Nov 22 10:06:27 crc kubenswrapper[4856]: I1122 10:06:27.822433 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj_e4a4c291-e079-478c-a3fb-86c0e9eceb07/pull/0.log" Nov 22 10:06:27 crc kubenswrapper[4856]: I1122 10:06:27.842056 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj_e4a4c291-e079-478c-a3fb-86c0e9eceb07/util/0.log" Nov 22 10:06:27 crc kubenswrapper[4856]: I1122 10:06:27.878919 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj_e4a4c291-e079-478c-a3fb-86c0e9eceb07/pull/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.036569 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj_e4a4c291-e079-478c-a3fb-86c0e9eceb07/pull/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.075428 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj_e4a4c291-e079-478c-a3fb-86c0e9eceb07/extract/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.104158 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1ca9b138781dcf125934bc878376abf75f877c2252ee8cf8f3500b7287fgjhj_e4a4c291-e079-478c-a3fb-86c0e9eceb07/util/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.267973 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-8hnhw_5ac8c521-cea0-4bdf-a90c-5d61cff9e30d/kube-rbac-proxy/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.338320 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7768f8c84f-8hnhw_5ac8c521-cea0-4bdf-a90c-5d61cff9e30d/manager/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.430543 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-w92jz_6726415a-8b70-4cde-80fa-5e9954cacb16/kube-rbac-proxy/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.577831 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6d8fd67bf7-w92jz_6726415a-8b70-4cde-80fa-5e9954cacb16/manager/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.591994 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-f29xt_1a831555-0593-4c78-9b32-8469445182c6/kube-rbac-proxy/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.653084 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-56dfb6b67f-f29xt_1a831555-0593-4c78-9b32-8469445182c6/manager/0.log" Nov 22 10:06:28 crc 
kubenswrapper[4856]: I1122 10:06:28.849377 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-bsv2v_b8bdffad-516a-4927-8319-72b583afead1/kube-rbac-proxy/0.log" Nov 22 10:06:28 crc kubenswrapper[4856]: I1122 10:06:28.987387 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8667fbf6f6-bsv2v_b8bdffad-516a-4927-8319-72b583afead1/manager/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.013457 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-29dmw_11bca657-d3dd-4ecc-b2a7-fc430d0e27d9/kube-rbac-proxy/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.080251 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-bf4c6585d-29dmw_11bca657-d3dd-4ecc-b2a7-fc430d0e27d9/manager/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.204916 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-v7qv6_8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de/kube-rbac-proxy/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.302043 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d86b44686-v7qv6_8c6edaa5-7bd8-4fbb-bee5-92735fe2d2de/manager/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.416958 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-xgmp2_33b6c3db-1c77-452f-a0b6-26ed5d261a15/kube-rbac-proxy/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.565558 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-2cv5t_5fa0bc39-1657-44bf-9c49-0bdee78de9bd/kube-rbac-proxy/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.679282 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5c75d7c94b-2cv5t_5fa0bc39-1657-44bf-9c49-0bdee78de9bd/manager/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.718946 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-769d9c7585-xgmp2_33b6c3db-1c77-452f-a0b6-26ed5d261a15/manager/0.log" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.754334 4856 patch_prober.go:28] interesting pod/machine-config-daemon-klt85 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.754406 4856 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.754463 4856 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-klt85" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.755414 4856 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77"} pod="openshift-machine-config-operator/machine-config-daemon-klt85" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 22 10:06:29 crc kubenswrapper[4856]: I1122 10:06:29.755480 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerName="machine-config-daemon" containerID="cri-o://31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" gracePeriod=600 Nov 22 10:06:29 crc kubenswrapper[4856]: E1122 10:06:29.887437 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.042618 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-cnd64_c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e/kube-rbac-proxy/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.188342 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-89ntq_d160dfd5-d7c2-4004-9b82-e6883be21331/kube-rbac-proxy/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.247425 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7879fb76fd-cnd64_c5a2bc4d-cfa9-4f96-add5-8e498f4caf7e/manager/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.295628 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7bb88cb858-89ntq_d160dfd5-d7c2-4004-9b82-e6883be21331/manager/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.419919 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-5gk8v_da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708/kube-rbac-proxy/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.474359 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6f8c5b86cb-5gk8v_da5eefaf-63ce-4e1b-8cbe-4f7b4d67e708/manager/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.625731 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-fkrzv_4d026193-be5d-4202-9379-adbff15842b6/kube-rbac-proxy/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.764798 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-66b7d6f598-fkrzv_4d026193-be5d-4202-9379-adbff15842b6/manager/0.log" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.788385 4856 generic.go:334] "Generic (PLEG): container finished" podID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" exitCode=0 
Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.788428 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerDied","Data":"31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77"} Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.788458 4856 scope.go:117] "RemoveContainer" containerID="7b8d0d88e8da694286b9829436d41459bda182635bbecc7206139ad174a04590" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.789140 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:06:30 crc kubenswrapper[4856]: E1122 10:06:30.789421 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:06:30 crc kubenswrapper[4856]: I1122 10:06:30.800576 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-rk8pp_d923f559-33c8-4832-8eec-c8b1879ba8cd/kube-rbac-proxy/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.029675 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-nmllh_837a0948-1f0d-4478-8e0a-fd8f897dd107/kube-rbac-proxy/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.103246 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6fdc856c5d-nmllh_837a0948-1f0d-4478-8e0a-fd8f897dd107/manager/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.116637 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-86d796d84d-rk8pp_d923f559-33c8-4832-8eec-c8b1879ba8cd/manager/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.310394 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr_9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1/manager/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.340207 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-79d88dcd44h5nrr_9a706ecd-d4b5-402c-a5e5-1cfb7244bcf1/kube-rbac-proxy/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.508609 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-zm2ph_e08c83ff-ad65-4b8d-8ce9-e21c467aa01f/kube-rbac-proxy/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.630793 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-j2sjt_d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f/kube-rbac-proxy/0.log" Nov 22 10:06:31 crc kubenswrapper[4856]: I1122 10:06:31.933577 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8486c7f98b-j2sjt_d3b50ae8-2e9c-4c5e-ae72-b31f10dfc37f/operator/0.log" Nov 22 10:06:32 crc 
kubenswrapper[4856]: I1122 10:06:32.048121 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-nns6d_7119f7f3-e9e5-49db-afec-6c3b9fbe5a97/kube-rbac-proxy/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.099128 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-k5p5q_53a40c3b-f70f-4fa6-80f9-bbbc6ef4a5f0/registry-server/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.277388 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5bdf4f7f7f-nns6d_7119f7f3-e9e5-49db-afec-6c3b9fbe5a97/manager/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.306184 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-tgk9d_bd3472b0-3e99-46e7-bef3-dbd8283ce6de/kube-rbac-proxy/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.441104 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-6dc664666c-tgk9d_bd3472b0-3e99-46e7-bef3-dbd8283ce6de/manager/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.577220 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-jmsvc_b9b9c1ca-f17c-4fbb-805e-4464e3b93b02/operator/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.684408 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-qf6ld_7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0/kube-rbac-proxy/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.960188 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-799cb6ffd6-qf6ld_7ccbf572-7a98-4769-9f8a-c7dea0b8a6d0/manager/0.log" Nov 22 10:06:32 crc kubenswrapper[4856]: I1122 10:06:32.992716 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-47z2l_d7809224-a0c8-47fa-91ac-2f02578819fe/kube-rbac-proxy/0.log" Nov 22 10:06:33 crc kubenswrapper[4856]: I1122 10:06:33.181853 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-swr7c_ea4c3d48-c5fc-498d-a095-455572fcbb9e/kube-rbac-proxy/0.log" Nov 22 10:06:33 crc kubenswrapper[4856]: I1122 10:06:33.236244 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-7798859c74-47z2l_d7809224-a0c8-47fa-91ac-2f02578819fe/manager/0.log" Nov 22 10:06:33 crc kubenswrapper[4856]: I1122 10:06:33.266831 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-8464cf66df-swr7c_ea4c3d48-c5fc-498d-a095-455572fcbb9e/manager/0.log" Nov 22 10:06:33 crc kubenswrapper[4856]: I1122 10:06:33.476864 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-pg9z9_e2be6208-86a1-4604-bddc-a3bd98258537/kube-rbac-proxy/0.log" Nov 22 10:06:33 crc kubenswrapper[4856]: I1122 10:06:33.529220 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-7cd4fb6f79-pg9z9_e2be6208-86a1-4604-bddc-a3bd98258537/manager/0.log" Nov 22 10:06:34 crc 
kubenswrapper[4856]: I1122 10:06:34.014450 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6cb9dc54f8-zm2ph_e08c83ff-ad65-4b8d-8ce9-e21c467aa01f/manager/0.log" Nov 22 10:06:41 crc kubenswrapper[4856]: I1122 10:06:41.710028 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:06:41 crc kubenswrapper[4856]: E1122 10:06:41.710710 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:06:50 crc kubenswrapper[4856]: I1122 10:06:50.816956 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hzskf_3a58051f-3a17-420b-aad3-453e819b7b85/control-plane-machine-set-operator/0.log" Nov 22 10:06:50 crc kubenswrapper[4856]: I1122 10:06:50.955921 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2szb8_4dff5c22-ed64-4f83-9f80-3c618d5585ab/kube-rbac-proxy/0.log" Nov 22 10:06:50 crc kubenswrapper[4856]: I1122 10:06:50.988260 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-2szb8_4dff5c22-ed64-4f83-9f80-3c618d5585ab/machine-api-operator/0.log" Nov 22 10:06:55 crc kubenswrapper[4856]: I1122 10:06:55.710612 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:06:55 crc kubenswrapper[4856]: E1122 10:06:55.711356 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:07:03 crc kubenswrapper[4856]: I1122 10:07:03.606684 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-7svc5_6d18d974-0eb5-4949-9632-f8f0d00946b5/cert-manager-controller/0.log" Nov 22 10:07:03 crc kubenswrapper[4856]: I1122 10:07:03.743015 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-jw8xt_be8f5834-2ddb-4156-8185-ae87e19cb6f6/cert-manager-cainjector/0.log" Nov 22 10:07:03 crc kubenswrapper[4856]: I1122 10:07:03.794878 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-28g8j_252a44ce-6594-4999-9785-22cabfc6b0d5/cert-manager-webhook/0.log" Nov 22 10:07:07 crc kubenswrapper[4856]: I1122 10:07:07.709741 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:07:07 crc kubenswrapper[4856]: E1122 10:07:07.710386 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:07:16 crc kubenswrapper[4856]: I1122 10:07:16.809272 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-7qvtc_3cb14566-2d38-4393-bdf4-cf9d06a764fd/nmstate-console-plugin/0.log" Nov 22 10:07:17 crc kubenswrapper[4856]: I1122 10:07:17.011151 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wrrsc_902c7237-e48c-4e23-a3fa-88b76d745120/nmstate-handler/0.log" Nov 22 10:07:17 crc kubenswrapper[4856]: I1122 10:07:17.082460 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-gkdzn_01cfaa66-61e4-414d-b456-8a6c64a2ed5a/nmstate-metrics/0.log" Nov 22 10:07:17 crc kubenswrapper[4856]: I1122 10:07:17.095915 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-gkdzn_01cfaa66-61e4-414d-b456-8a6c64a2ed5a/kube-rbac-proxy/0.log" Nov 22 10:07:17 crc kubenswrapper[4856]: I1122 10:07:17.306926 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-pnd6q_9fb0076b-cac2-41cc-aa7b-a02bb1e64c28/nmstate-webhook/0.log" Nov 22 10:07:17 crc kubenswrapper[4856]: I1122 10:07:17.321385 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-wfj97_ded5842b-24c9-4039-ba91-2bed9c39a83b/nmstate-operator/0.log" Nov 22 10:07:19 crc kubenswrapper[4856]: I1122 10:07:19.710308 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:07:19 crc kubenswrapper[4856]: E1122 10:07:19.711013 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:07:30 crc kubenswrapper[4856]: I1122 10:07:30.710336 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:07:30 crc kubenswrapper[4856]: E1122 10:07:30.711074 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:07:32 crc kubenswrapper[4856]: I1122 10:07:32.244456 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fbctt_f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92/kube-rbac-proxy/0.log" Nov 22 10:07:32 crc kubenswrapper[4856]: I1122 10:07:32.413775 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-frr-files/0.log" Nov 22 10:07:32 crc kubenswrapper[4856]: I1122 10:07:32.678817 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fbctt_f1afef1a-d731-41ed-a7fe-e4e0dcf7ca92/controller/0.log" Nov 22 10:07:32 crc kubenswrapper[4856]: I1122 10:07:32.693009 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-reloader/0.log" Nov 22 10:07:32 crc kubenswrapper[4856]: I1122 10:07:32.704287 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-metrics/0.log" Nov 22 10:07:32 crc kubenswrapper[4856]: I1122 10:07:32.747919 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-frr-files/0.log" Nov 22 10:07:32 crc kubenswrapper[4856]: I1122 10:07:32.891543 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-reloader/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.115931 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-metrics/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.128556 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-frr-files/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.128718 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-reloader/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.165369 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-metrics/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.309763 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-frr-files/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.313898 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-metrics/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.318871 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/cp-reloader/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.375045 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/controller/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.525758 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/frr-metrics/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.533075 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/kube-rbac-proxy/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.560693 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/kube-rbac-proxy-frr/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.761952 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-zwkjl_7d96263d-56d8-4b14-a4ab-a5cd75432de3/frr-k8s-webhook-server/0.log" Nov 22 10:07:33 crc kubenswrapper[4856]: I1122 10:07:33.767138 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/reloader/0.log" Nov 22 10:07:34 crc kubenswrapper[4856]: I1122 10:07:34.045122 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-579ff74fd9-zgszm_cae02c81-3bae-4eb3-a934-f66f9e4c3ce2/manager/0.log" Nov 22 10:07:34 crc kubenswrapper[4856]: I1122 10:07:34.214000 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77bfffbc85-hkqb7_9b8a077c-4fa3-419a-bcd1-12bd366a1ef8/webhook-server/0.log" Nov 22 10:07:34 crc kubenswrapper[4856]: I1122 10:07:34.314696 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pk8b9_b45b69d0-d481-44ef-a766-6c43dc57be23/kube-rbac-proxy/0.log" Nov 22 10:07:35 crc kubenswrapper[4856]: I1122 10:07:35.911588 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pk8b9_b45b69d0-d481-44ef-a766-6c43dc57be23/speaker/0.log" Nov 22 10:07:37 crc kubenswrapper[4856]: I1122 10:07:37.412084 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rdwqk_47dda6c4-0264-433f-9edd-4599ee978799/frr/0.log" Nov 22 10:07:41 crc kubenswrapper[4856]: I1122 10:07:41.709621 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:07:41 crc kubenswrapper[4856]: E1122 10:07:41.710344 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:07:46 crc kubenswrapper[4856]: I1122 10:07:46.998167 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln_53e0b47b-1bfb-4207-bcbe-37ab71f5a642/util/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.213653 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln_53e0b47b-1bfb-4207-bcbe-37ab71f5a642/pull/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.225624 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln_53e0b47b-1bfb-4207-bcbe-37ab71f5a642/util/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.276159 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln_53e0b47b-1bfb-4207-bcbe-37ab71f5a642/pull/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.387964 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln_53e0b47b-1bfb-4207-bcbe-37ab71f5a642/util/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.474414 4856 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln_53e0b47b-1bfb-4207-bcbe-37ab71f5a642/pull/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.489115 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931app5ln_53e0b47b-1bfb-4207-bcbe-37ab71f5a642/extract/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.651374 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt_30e04f43-7f8f-41bf-9253-8628ff4bd88d/util/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.732781 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt_30e04f43-7f8f-41bf-9253-8628ff4bd88d/util/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.751743 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt_30e04f43-7f8f-41bf-9253-8628ff4bd88d/pull/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.806043 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt_30e04f43-7f8f-41bf-9253-8628ff4bd88d/pull/0.log" Nov 22 10:07:47 crc kubenswrapper[4856]: I1122 10:07:47.961639 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt_30e04f43-7f8f-41bf-9253-8628ff4bd88d/extract/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.010641 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt_30e04f43-7f8f-41bf-9253-8628ff4bd88d/pull/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.020900 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ez5gqt_30e04f43-7f8f-41bf-9253-8628ff4bd88d/util/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.221690 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g_57f3408b-029f-4f55-a8ee-d0dea3c82197/util/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.336268 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g_57f3408b-029f-4f55-a8ee-d0dea3c82197/pull/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.336478 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g_57f3408b-029f-4f55-a8ee-d0dea3c82197/pull/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.348217 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g_57f3408b-029f-4f55-a8ee-d0dea3c82197/util/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.501344 4856 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g_57f3408b-029f-4f55-a8ee-d0dea3c82197/extract/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.587304 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g_57f3408b-029f-4f55-a8ee-d0dea3c82197/pull/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.601462 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210fqq5g_57f3408b-029f-4f55-a8ee-d0dea3c82197/util/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.747924 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqb7t_3c7b0aba-250c-483e-ba94-3dcc4b9c59bb/extract-utilities/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.917096 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqb7t_3c7b0aba-250c-483e-ba94-3dcc4b9c59bb/extract-content/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.941356 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqb7t_3c7b0aba-250c-483e-ba94-3dcc4b9c59bb/extract-utilities/0.log" Nov 22 10:07:48 crc kubenswrapper[4856]: I1122 10:07:48.955817 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqb7t_3c7b0aba-250c-483e-ba94-3dcc4b9c59bb/extract-content/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.116041 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqb7t_3c7b0aba-250c-483e-ba94-3dcc4b9c59bb/extract-content/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.131922 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqb7t_3c7b0aba-250c-483e-ba94-3dcc4b9c59bb/extract-utilities/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.317165 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tf2dt_ee03d935-6d6f-4d2d-ab4e-bc9e85256487/extract-utilities/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.534642 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tf2dt_ee03d935-6d6f-4d2d-ab4e-bc9e85256487/extract-utilities/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.552998 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tf2dt_ee03d935-6d6f-4d2d-ab4e-bc9e85256487/extract-content/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.562124 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tf2dt_ee03d935-6d6f-4d2d-ab4e-bc9e85256487/extract-content/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.778974 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tf2dt_ee03d935-6d6f-4d2d-ab4e-bc9e85256487/extract-content/0.log" Nov 22 10:07:49 crc kubenswrapper[4856]: I1122 10:07:49.803598 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tf2dt_ee03d935-6d6f-4d2d-ab4e-bc9e85256487/extract-utilities/0.log" Nov 22 10:07:50 crc kubenswrapper[4856]: 
I1122 10:07:50.007380 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k_f2efd150-a416-4567-8919-bfc240a93eb0/util/0.log" Nov 22 10:07:50 crc kubenswrapper[4856]: I1122 10:07:50.259921 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k_f2efd150-a416-4567-8919-bfc240a93eb0/pull/0.log" Nov 22 10:07:50 crc kubenswrapper[4856]: I1122 10:07:50.327403 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k_f2efd150-a416-4567-8919-bfc240a93eb0/util/0.log" Nov 22 10:07:50 crc kubenswrapper[4856]: I1122 10:07:50.500613 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k_f2efd150-a416-4567-8919-bfc240a93eb0/pull/0.log" Nov 22 10:07:50 crc kubenswrapper[4856]: I1122 10:07:50.687052 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k_f2efd150-a416-4567-8919-bfc240a93eb0/util/0.log" Nov 22 10:07:50 crc kubenswrapper[4856]: I1122 10:07:50.793283 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k_f2efd150-a416-4567-8919-bfc240a93eb0/pull/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.029864 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c689r8k_f2efd150-a416-4567-8919-bfc240a93eb0/extract/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.081244 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kwqfg_e2a94a89-16b5-480b-b1fd-18af97bc38da/marketplace-operator/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.233036 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s8jpj_a8b51997-87ba-499c-903d-82c1b85c0968/extract-utilities/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.398826 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s8jpj_a8b51997-87ba-499c-903d-82c1b85c0968/extract-content/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.408281 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s8jpj_a8b51997-87ba-499c-903d-82c1b85c0968/extract-utilities/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.457393 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s8jpj_a8b51997-87ba-499c-903d-82c1b85c0968/extract-content/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.648546 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s8jpj_a8b51997-87ba-499c-903d-82c1b85c0968/extract-utilities/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.693076 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s8jpj_a8b51997-87ba-499c-903d-82c1b85c0968/extract-content/0.log" Nov 22 10:07:51 crc kubenswrapper[4856]: I1122 10:07:51.908592 4856 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g4jn9_1966788b-abc1-4c4a-a29c-aaeba9a3ca65/extract-utilities/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.111199 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g4jn9_1966788b-abc1-4c4a-a29c-aaeba9a3ca65/extract-content/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.136439 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-rqb7t_3c7b0aba-250c-483e-ba94-3dcc4b9c59bb/registry-server/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.142175 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g4jn9_1966788b-abc1-4c4a-a29c-aaeba9a3ca65/extract-utilities/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.324131 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g4jn9_1966788b-abc1-4c4a-a29c-aaeba9a3ca65/extract-content/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.542886 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g4jn9_1966788b-abc1-4c4a-a29c-aaeba9a3ca65/extract-utilities/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.556387 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g4jn9_1966788b-abc1-4c4a-a29c-aaeba9a3ca65/extract-content/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.650760 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-tf2dt_ee03d935-6d6f-4d2d-ab4e-bc9e85256487/registry-server/0.log" Nov 22 10:07:52 crc kubenswrapper[4856]: I1122 10:07:52.974987 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-s8jpj_a8b51997-87ba-499c-903d-82c1b85c0968/registry-server/0.log" Nov 22 10:07:55 crc kubenswrapper[4856]: I1122 10:07:55.048696 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-g4jn9_1966788b-abc1-4c4a-a29c-aaeba9a3ca65/registry-server/0.log" Nov 22 10:07:55 crc kubenswrapper[4856]: I1122 10:07:55.710145 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:07:55 crc kubenswrapper[4856]: E1122 10:07:55.710479 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:08:04 crc kubenswrapper[4856]: I1122 10:08:04.498303 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-65pzs_6161a409-9230-4400-a777-a234bd4f9747/prometheus-operator/0.log" Nov 22 10:08:04 crc kubenswrapper[4856]: I1122 10:08:04.683238 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54fff96fb4-9pdx5_efab8443-6c3b-47ee-9ba2-22a3e1f28892/prometheus-operator-admission-webhook/0.log" Nov 22 10:08:04 crc kubenswrapper[4856]: I1122 10:08:04.811894 4856 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-54fff96fb4-b7q8r_71c7fcd5-848f-4503-b1be-09ae67600084/prometheus-operator-admission-webhook/0.log" Nov 22 10:08:04 crc kubenswrapper[4856]: I1122 10:08:04.925153 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-2zz67_6a78f586-cd46-4f0e-b24b-62b93885a986/operator/0.log" Nov 22 10:08:05 crc kubenswrapper[4856]: I1122 10:08:05.004576 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-nfw7d_5b1b8d7d-8d9e-4cfc-93ca-764793a0b848/perses-operator/0.log" Nov 22 10:08:07 crc kubenswrapper[4856]: I1122 10:08:07.710716 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:08:07 crc kubenswrapper[4856]: E1122 10:08:07.711379 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:08:20 crc kubenswrapper[4856]: E1122 10:08:20.022228 4856 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.132:33616->38.102.83.132:36441: write tcp 38.102.83.132:33616->38.102.83.132:36441: write: broken pipe Nov 22 10:08:20 crc kubenswrapper[4856]: I1122 10:08:20.710589 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:08:20 crc kubenswrapper[4856]: E1122 10:08:20.710867 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:08:31 crc kubenswrapper[4856]: I1122 10:08:31.710365 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:08:31 crc kubenswrapper[4856]: E1122 10:08:31.711304 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:08:45 crc kubenswrapper[4856]: I1122 10:08:45.709704 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:08:45 crc kubenswrapper[4856]: E1122 10:08:45.710656 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:08:58 crc kubenswrapper[4856]: I1122 10:08:58.716858 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:08:58 crc kubenswrapper[4856]: E1122 10:08:58.718257 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:09:09 crc kubenswrapper[4856]: I1122 10:09:09.709745 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:09:09 crc kubenswrapper[4856]: E1122 10:09:09.710481 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:09:20 crc kubenswrapper[4856]: I1122 10:09:20.710673 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:09:20 crc kubenswrapper[4856]: E1122 10:09:20.711654 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:09:31 crc kubenswrapper[4856]: I1122 10:09:31.709918 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:09:31 crc kubenswrapper[4856]: E1122 10:09:31.710472 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:09:43 crc kubenswrapper[4856]: I1122 10:09:43.710732 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:09:43 crc kubenswrapper[4856]: E1122 10:09:43.711500 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:09:57 crc kubenswrapper[4856]: I1122 10:09:57.709640 4856 
scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:09:57 crc kubenswrapper[4856]: E1122 10:09:57.710377 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:10:09 crc kubenswrapper[4856]: I1122 10:10:09.710326 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:10:09 crc kubenswrapper[4856]: E1122 10:10:09.711130 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:10:22 crc kubenswrapper[4856]: I1122 10:10:22.713375 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:10:22 crc kubenswrapper[4856]: E1122 10:10:22.723552 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:10:23 crc kubenswrapper[4856]: I1122 10:10:23.055692 4856 generic.go:334] "Generic (PLEG): container finished" podID="1a1ca3cb-fb3f-420c-8caf-1787dd762c29" containerID="d0a9054d4703c7c961fdc2ced123462e2d3e8643152e6eb94c48b76222515a44" exitCode=0 Nov 22 10:10:23 crc kubenswrapper[4856]: I1122 10:10:23.055742 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" event={"ID":"1a1ca3cb-fb3f-420c-8caf-1787dd762c29","Type":"ContainerDied","Data":"d0a9054d4703c7c961fdc2ced123462e2d3e8643152e6eb94c48b76222515a44"} Nov 22 10:10:23 crc kubenswrapper[4856]: I1122 10:10:23.056719 4856 scope.go:117] "RemoveContainer" containerID="d0a9054d4703c7c961fdc2ced123462e2d3e8643152e6eb94c48b76222515a44" Nov 22 10:10:23 crc kubenswrapper[4856]: I1122 10:10:23.974163 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5sxxb_must-gather-hj5wk_1a1ca3cb-fb3f-420c-8caf-1787dd762c29/gather/0.log" Nov 22 10:10:32 crc kubenswrapper[4856]: I1122 10:10:32.758650 4856 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5sxxb/must-gather-hj5wk"] Nov 22 10:10:32 crc kubenswrapper[4856]: I1122 10:10:32.759438 4856 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" podUID="1a1ca3cb-fb3f-420c-8caf-1787dd762c29" containerName="copy" containerID="cri-o://f953fe7f427535f2f5e24645c11e09aed13aebea34b8e12a7325003582fb3d5b" gracePeriod=2 Nov 22 10:10:32 crc kubenswrapper[4856]: I1122 10:10:32.777355 4856 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-must-gather-5sxxb/must-gather-hj5wk"] Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.160628 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5sxxb_must-gather-hj5wk_1a1ca3cb-fb3f-420c-8caf-1787dd762c29/copy/0.log" Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.164967 4856 generic.go:334] "Generic (PLEG): container finished" podID="1a1ca3cb-fb3f-420c-8caf-1787dd762c29" containerID="f953fe7f427535f2f5e24645c11e09aed13aebea34b8e12a7325003582fb3d5b" exitCode=143 Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.289300 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5sxxb_must-gather-hj5wk_1a1ca3cb-fb3f-420c-8caf-1787dd762c29/copy/0.log" Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.289697 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.381157 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-must-gather-output\") pod \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.381298 4856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvvsf\" (UniqueName: \"kubernetes.io/projected/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-kube-api-access-xvvsf\") pod \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\" (UID: \"1a1ca3cb-fb3f-420c-8caf-1787dd762c29\") " Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.389174 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-kube-api-access-xvvsf" (OuterVolumeSpecName: "kube-api-access-xvvsf") pod "1a1ca3cb-fb3f-420c-8caf-1787dd762c29" (UID: "1a1ca3cb-fb3f-420c-8caf-1787dd762c29"). InnerVolumeSpecName "kube-api-access-xvvsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.483869 4856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvvsf\" (UniqueName: \"kubernetes.io/projected/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-kube-api-access-xvvsf\") on node \"crc\" DevicePath \"\"" Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.606876 4856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1a1ca3cb-fb3f-420c-8caf-1787dd762c29" (UID: "1a1ca3cb-fb3f-420c-8caf-1787dd762c29"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.687237 4856 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1a1ca3cb-fb3f-420c-8caf-1787dd762c29-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 22 10:10:33 crc kubenswrapper[4856]: I1122 10:10:33.709672 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:10:33 crc kubenswrapper[4856]: E1122 10:10:33.710110 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:10:34 crc kubenswrapper[4856]: I1122 10:10:34.177937 4856 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5sxxb_must-gather-hj5wk_1a1ca3cb-fb3f-420c-8caf-1787dd762c29/copy/0.log" Nov 22 10:10:34 crc kubenswrapper[4856]: I1122 10:10:34.178429 4856 scope.go:117] "RemoveContainer" containerID="f953fe7f427535f2f5e24645c11e09aed13aebea34b8e12a7325003582fb3d5b" Nov 22 10:10:34 crc kubenswrapper[4856]: I1122 10:10:34.178447 4856 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5sxxb/must-gather-hj5wk" Nov 22 10:10:34 crc kubenswrapper[4856]: I1122 10:10:34.209289 4856 scope.go:117] "RemoveContainer" containerID="d0a9054d4703c7c961fdc2ced123462e2d3e8643152e6eb94c48b76222515a44" Nov 22 10:10:34 crc kubenswrapper[4856]: I1122 10:10:34.721235 4856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a1ca3cb-fb3f-420c-8caf-1787dd762c29" path="/var/lib/kubelet/pods/1a1ca3cb-fb3f-420c-8caf-1787dd762c29/volumes" Nov 22 10:10:48 crc kubenswrapper[4856]: I1122 10:10:48.716147 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:10:48 crc kubenswrapper[4856]: E1122 10:10:48.716973 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:10:51 crc kubenswrapper[4856]: I1122 10:10:51.589881 4856 scope.go:117] "RemoveContainer" containerID="eb95c2301512441d74911286cee23b4d49f00ac92914ec24f612ac9d2e356376" Nov 22 10:11:03 crc kubenswrapper[4856]: I1122 10:11:03.709060 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:11:03 crc kubenswrapper[4856]: E1122 10:11:03.709646 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" 
Nov 22 10:11:18 crc kubenswrapper[4856]: I1122 10:11:18.715691 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:11:18 crc kubenswrapper[4856]: E1122 10:11:18.716704 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:11:29 crc kubenswrapper[4856]: I1122 10:11:29.709293 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:11:29 crc kubenswrapper[4856]: E1122 10:11:29.711416 4856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-klt85_openshift-machine-config-operator(0efefc3f-da5f-4035-81dc-6b5ab51e3df1)\"" pod="openshift-machine-config-operator/machine-config-daemon-klt85" podUID="0efefc3f-da5f-4035-81dc-6b5ab51e3df1" Nov 22 10:11:44 crc kubenswrapper[4856]: I1122 10:11:44.709392 4856 scope.go:117] "RemoveContainer" containerID="31a584a4353d5ba257d84479cb79af4cd3dc6c18e465f6e9204351bfdd549f77" Nov 22 10:11:45 crc kubenswrapper[4856]: I1122 10:11:45.899078 4856 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-klt85" event={"ID":"0efefc3f-da5f-4035-81dc-6b5ab51e3df1","Type":"ContainerStarted","Data":"ae3584054442eae668cd3441af6987c285d1b4b0b1b4dfb89f30871ca5d94585"}